Commit 090cc3b

Merge pull request #51 from meleksabit/stage

chore: refactor helm charts + add screenshots

2 parents ff75dfa + 9010e89 · commit 090cc3b

File tree

24 files changed: +470 −377 lines changed

README.md

Lines changed: 14 additions & 2 deletions

````diff
@@ -8,7 +8,7 @@
 
 [![PR Title Check](https://github.com/meleksabit/blockchain-ai-security-platform-terraform-aws/actions/workflows/pr-title-linter.yml/badge.svg)](https://github.com/meleksabit/blockchain-ai-security-platform-terraform-aws/actions/workflows/pr-title-linter.yml) [![GitHub Release](https://img.shields.io/github/v/release/meleksabit/blockchain-ai-security-platform-terraform-aws)](https://github.com/meleksabit/blockchain-ai-security-platform-terraform-aws/releases)
 
-### An ֎🇦🇮-powered security platform for detecting anomalies in blockchain transactions, built with Terraform <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/terraform.png" alt="Terraform" title="Terraform"/> for AWS <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/aws.png" alt="AWS" title="AWS"/> infrastructure, Helm <img height="32" width="32" src="https://cdn.simpleicons.org/helm" /> for Kubernetes <img height="32" width="32" src="https://cdn.simpleicons.org/kubernetes" /> deployments, and a CI/CD <img height="32" width="32" src="https://cdn.simpleicons.org/jenkins" /> pipeline. The platform integrates AI agents <img height="32" width="32" src="https://cdn.simpleicons.org/openai" />, Go <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/go.png" alt="Go" title="Go"/> microservices, RDS <img height="32" width="32" src="https://cdn.simpleicons.org/amazonrds" />, and containerized deployments for a robust DevSecOps solution.
+### An ֎🇦🇮-powered security platform for detecting anomalies in blockchain transactions, built with Terraform <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/terraform.png" alt="Terraform" title="Terraform"/> for AWS <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/aws.png" alt="AWS" title="AWS"/> infrastructure, Helm <img height="32" width="32" src="https://cdn.simpleicons.org/helm" /> for Kubernetes <img height="32" width="32" src="https://cdn.simpleicons.org/kubernetes" /> deployments, and a CI/CD <img height="32" width="32" src="https://cdn.simpleicons.org/jenkins" /> pipeline. The platform integrates AI agents <img height="32" width="32" src="https://cdn.simpleicons.org/openai" />, Go <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/go.png" alt="Go" title="Go"/> microservices, RDS, and containerized deployments for a robust DevSecOps solution.
 
 ## Table of Contents
 - [Implementation Overview](#implementation-overview)
@@ -36,6 +36,9 @@
 - **AWS**: Deployed via Terraform Cloud.
 - **Components**:
   - <img width="33" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/python.png" alt="Python" title="Python"/> **AI Agent**: Core anomaly detection service (port 8000).
+    <p align="center">
+    <img src="Screenshot 2025-07-15 183315.png" width="733"/>
+    </p>
   - <img height="32" width="32" src="https://cdn.simpleicons.org/go" /> **Go Microservices**:
     - `blockchain-monitor`: Tracks transactions (port 8081).
     - `anomaly-detector`: Analyzes anomalies (port 8082).
@@ -81,7 +84,7 @@
 
 ## 📝✅Prerequisites
 
-1. <img height="32" width="32" src="https://cdn.simpleicons.org/amazonwebservices" /> **AWS Account**:
+1. <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/aws.png" alt="AWS" title="AWS"/> **AWS Account**:
    - Active account with IAM user access keys (EKS, EC2, ELB, ECR, IAM, S3, RDS permissions).
    - Region: `eu-central-1`.
 
@@ -419,6 +422,11 @@ Obtain an Infura API key by creating an account at <a href="https://infura.io">i
   curl http://<blockchain-monitor-load-balancer>:8081/health
   curl http://<ai-agent-load-balancer>:8000/health
   ```
+
+  <p align="center">
+  <img src="Screenshot 2025-07-12 215045.png" width="733"/>
+  </p>
+
 - Ensure the `network` field matches the configured value.
 
 4. **IAM Role (`TerraformCloudRole`)**:
@@ -583,6 +591,10 @@ Obtain an Infura API key by creating an account at <a href="https://infura.io">i
   ```
 - Use the LoadBalancer URL (port 8083).
 
+<p align="center">
+<img src="Screenshot 2025-07-12 192906.png" width="733"/>
+</p>
+
 ## 🏗️🧱📐Infrastructure Details
 Infrastructure is managed in the `terraform/` folder:
 - **Modules**: `eks`, `alb`, `s3`, `iam`, `network`, `vault`, `rds`, `vault`.
````

Screenshot 2025-07-12 192906.png (54.2 KB)

Screenshot 2025-07-12 215045.png (28.5 KB)

Screenshot 2025-07-15 183315.png (656 KB)

ai-agent/Dockerfile

Lines changed: 4 additions & 2 deletions

````diff
@@ -12,9 +12,11 @@ COPY --from=builder /usr/local/lib/python3.12/site-packages/ /usr/local/lib/pyth
 COPY --from=builder /usr/local/bin/ /usr/local/bin/
 COPY ai_agent.py .
 RUN useradd -m appuser && \
+    mkdir -p /home/appuser/app/model_cache && \
+    chown -R appuser:appuser /home/appuser/app/model_cache && \
     mkdir -p /home/appuser/.cache && \
-    chown -R appuser:appuser /home/appuser
+    chown -R appuser:appuser /home/appuser/.cache
 USER appuser
 HEALTHCHECK --interval=30s --timeout=3s \
     CMD curl -f http://localhost:8000/health || exit 1
-CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-w", "4", "-b", "0.0.0.0:8000", "ai_agent:app"]
+CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-w", "1", "-b", "0.0.0.0:8000", "ai_agent:app"]
````
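Two things happen here: the cache directory used by `ai_agent.py` is pre-created and chowned to the non-root `appuser`, and the gunicorn worker count drops from 4 to 1, presumably so the transformer model is loaded once rather than once per worker process. As a quick sanity check of where the relative cache path lands, assuming the image sets `WORKDIR /home/appuser/app` (the `WORKDIR` instruction is outside this hunk):

```python
import posixpath

# Assumption (not shown in the hunk): the image sets WORKDIR /home/appuser/app.
workdir = "/home/appuser/app"
cache_dir = "./model_cache"   # relative path hard-coded in ai_agent.py

# Where the model cache actually lands once the container runs as appuser:
resolved = posixpath.normpath(posixpath.join(workdir, cache_dir))
print(resolved)  # /home/appuser/app/model_cache
```

That resolved path is exactly the directory the new `RUN` lines create and chown, which is why the pre-creation is needed: `appuser` could not otherwise write there at runtime.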

ai-agent/ai_agent.py

Lines changed: 52 additions & 47 deletions

````diff
@@ -18,7 +18,7 @@
 class RedactingFormatter(logging.Formatter):
     def format(self, record):
         msg = super().format(record)
-        infura_key = os.environ.get("INFURA_API_KEY", "unknown")  # Check every time
+        infura_key = os.environ.get("INFURA_API_KEY", "unknown")
         return re.sub(rf"https://(sepolia|ropsten)\.infura\.io/v3/{infura_key}", "https://[network].infura.io/v3/[REDACTED]", msg)
 
 # Configure Logging
````
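For reference, the formatter touched above, reassembled from the diff into a runnable unit (the key value below is a throwaway example, not a real credential):

```python
import logging
import os
import re

class RedactingFormatter(logging.Formatter):
    """Scrub the Infura project key out of any URL that reaches the logs."""
    def format(self, record):
        msg = super().format(record)
        infura_key = os.environ.get("INFURA_API_KEY", "unknown")
        return re.sub(
            rf"https://(sepolia|ropsten)\.infura\.io/v3/{infura_key}",
            "https://[network].infura.io/v3/[REDACTED]",
            msg,
        )

# Hypothetical usage with a throwaway key:
os.environ["INFURA_API_KEY"] = "deadbeef"
record = logging.LogRecord("demo", logging.INFO, __file__, 1,
                           "GET https://sepolia.infura.io/v3/deadbeef", None, None)
print(RedactingFormatter("%(message)s").format(record))
# → GET https://[network].infura.io/v3/[REDACTED]
```

Reading the key on every `format` call (rather than once at import) means a key fetched from Vault after startup is still redacted, which is why only the stale comment was removed.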
````diff
@@ -38,32 +38,37 @@ def format(self, record):
 class AIModel:
     instance = None
     model_loaded = False
+    tokenizer = None
+    model = None
 
     @classmethod
     def get_instance(cls):
         if cls.instance is None:
             cls.instance = cls()
         return cls.instance
 
-    def __init__(self):
-        pass
-
     async def load_model(self):
         cache_dir = "./model_cache"
         try:
-            logger.info("Loading AI Model in background...")
-            self.tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir)
-            self.model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir)
+            logger.info("🏁Starting AI model loading at %s", time.ctime())
+            start_time = time.time()
+            # Check if cache exists and contains config.json
+            cache_path = os.path.join(cache_dir, "config.json")
+            local_only = os.path.exists(cache_path)
+            self.tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir, local_files_only=local_only)
+            self.model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir, local_files_only=local_only)
+            load_duration = time.time() - start_time
             self.model_loaded = True
-            logger.info("AI Model Loaded Successfully!")
+            logger.info("⏳⌛AI model loaded in %.2f seconds", load_duration)
+            logger.info("🚀AI Model Loaded Successfully!✅")
         except Exception as e:
-            logger.error(f"Error loading AI model: {e}")
+            logger.error(f"🚧 ⚠️Error loading AI model: {e}⚠️ 🚧")
             self.model_loaded = False
             raise
 
     def analyze(self, tx_data, web3):
         if not self.model_loaded:
-            raise RuntimeError("AI Model not loaded yet") 
+            raise RuntimeError("AI Model not loaded yet")
         text = f"TX: {tx_data['from']} -> {tx_data['to']}, Amount: {web3.from_wei(tx_data['value'], 'ether')} ETH, Gas: {tx_data['gas']}"
         inputs = self.tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
         with torch.no_grad():
````
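The rewritten `load_model` chooses between an online load and an offline (`local_files_only`) load by probing the cache directory for `config.json`. The probe in isolation (the function name here is ours):

```python
import os
import tempfile

def cache_is_primed(cache_dir: str) -> bool:
    """Mirror of the diff's check: treat the cache as usable offline only
    if config.json is already present at the cache root."""
    return os.path.exists(os.path.join(cache_dir, "config.json"))

with tempfile.TemporaryDirectory() as cache_dir:
    print(cache_is_primed(cache_dir))   # False → first run downloads over HTTP
    open(os.path.join(cache_dir, "config.json"), "w").close()
    print(cache_is_primed(cache_dir))   # True  → later runs stay offline
```

One caveat worth flagging: recent `transformers` versions materialize `cache_dir` contents under hashed `models--*/snapshots/` subdirectories rather than at the cache root, so a top-level `config.json` check may never flip to offline mode; worth verifying against the layout your `transformers` version actually produces.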
````diff
@@ -73,37 +78,35 @@ def analyze(self, tx_data, web3):
         if tx_data["value"] > historical_avg_value * 5:
             anomaly_score += 0.2
         if anomaly_score > 0.7:
-            return f"High Anomaly Score: {anomaly_score:.2f} -> Potential Risk!"
+            return f"/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿̿ ̿̿💥High Anomaly Score: {anomaly_score:.2f} -> Potential Risk!☣️☢️"
         elif anomaly_score > 0.5:
-            return f"Medium Anomaly Score: {anomaly_score:.2f} -> Needs Review"
+            return f" 🕵️ Medium Anomaly Score: {anomaly_score:.2f} -> Needs Review👀"
         else:
-            return f"Normal Transaction (Score: {anomaly_score:.2f})"
+            return f"👌Normal Transaction (Score: {anomaly_score:.2f})"
 
 # Ensure hf_xet is installed
 def ensure_hf_xet():
     try:
         import hf_xet
-        logger.info("hf_xet package is already installed")
+        logger.info("📦hf_xet package is already installed")
     except ImportError:
-        logger.warning("hf_xet not installed in image, expected pre-installation. Falling back to HTTP download.")
+        logger.warning("🚨hf_xet not installed in image, expected pre-installation. Falling back to HTTP download.⚠️")
 
 # Health endpoint
 @app.get("/health")
 async def health_check():
     try:
         web3 = connect_web3()
         ai_model = AIModel.get_instance()
-        return {
-            "status": "healthy" if ai_model.model_loaded else "starting",
-            "web3_connected": web3.is_connected(),
-            "model_loaded": ai_model.model_loaded,
-            "network": NETWORK
-        }
+        if ai_model.model_loaded:
+            return {"status": "🌾💚healthy", "web3_connected": web3.is_connected(), "model_loaded": True, "network": NETWORK}
+        else:
+            return {"status": "⏳⌛loading", "web3_connected": web3.is_connected(), "model_loaded": False, "network": NETWORK}, 200
     except HTTPException as e:
         raise e
     except Exception as e:
-        logger.error(f"Health check failed: {e}")
-        return {"status": "unhealthy", "error": "Health check failed due to an internal error"}
+        logger.error(f"⚠️👎Health check failed: {e}")
+        return {"ـــــــــــــــﮩ٨ـ❤️️status": "☣️☠️unhealthy", "⚠️error⚠️": "An internal error has occurred."}, 503
 
 # Vault client setup
 @lru_cache(maxsize=1)
````
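Apart from the added emoji, the scoring branches are unchanged: fixed cut-offs at 0.7 and 0.5. Reduced to a pure function (the function name is ours) for clarity:

```python
def classify_anomaly(score: float) -> str:
    """Same cut-offs as ai_agent's analyze(): >0.7 high, >0.5 medium, else normal."""
    if score > 0.7:
        return f"High Anomaly Score: {score:.2f} -> Potential Risk!"
    elif score > 0.5:
        return f"Medium Anomaly Score: {score:.2f} -> Needs Review"
    return f"Normal Transaction (Score: {score:.2f})"

for s in (0.9, 0.6, 0.3):
    print(classify_anomaly(s))
```

A side note on the health endpoint above: returning a `({...}, 200)` or `({...}, 503)` tuple follows Flask's convention, but FastAPI serializes the whole tuple into the response body and still answers 200; `fastapi.responses.JSONResponse(content=..., status_code=503)` would be needed to actually change the status code.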
````diff
@@ -113,7 +116,7 @@ def get_vault_client():
     client.token = os.environ.get("VAULT_AUTH_TOKEN")
     if not client.is_authenticated():
         raise Exception("Vault authentication failed")
-    logger.info("Vault client authenticated successfully")
+    logger.info("🔐Vault client authenticated successfully")
     return client
 
 # Secrets retrieval
````
````diff
@@ -124,13 +127,13 @@ def get_infura_key():
         secret = client.secrets.kv.v2.read_secret_version(path="infura", mount_point="secret")
         api_key = secret["data"]["data"]["api_key"]
         if api_key.startswith("https://"):
-            logger.warning("Infura key from Vault appears to be a full URL - extracting key")
+            logger.warning("⚠︎ ⚡︎Infura key from Vault appears to be a full URL - extracting key")
             api_key = api_key.split("/")[-1]
         os.environ["INFURA_API_KEY"] = api_key
-        logger.info("Infura key retrieved from Vault")
+        logger.info("🔑Infura key retrieved from Vault")
         return api_key
     except Exception as e:
-        logger.error(f"Vault Infura Error: {e}") 
+        logger.error(f"Vault Infura Error: {e}")
         raise
 
 @retry(
````
````diff
@@ -145,22 +148,22 @@ def connect_web3(network=NETWORK):
         infura_key = get_infura_key()
         url = f"https://{network}.infura.io/v3/{infura_key}"
     else:
-        logger.error(f"Unsupported network: {network}")
-        raise ValueError(f"Unsupported network: {network}")
+        logger.error(f"💀💻Unsupported network: {network}")
+        raise ValueError(f"💀💻Unsupported network: {network}")
 
     try:
         web3 = Web3(Web3.HTTPProvider(url))
         connected = web3.is_connected()
         if not connected:
-            logger.error(f"Web3 connection failed - {network} not reachable (key redacted)")
+            logger.error(f"🔗💔Web3 connection failed - {network} not reachable (key redacted)")
             raise HTTPException(status_code=503, detail=f"Failed to connect to {network}")
-        logger.info(f"Connected to {network} blockchain!")
+        logger.info(f"🔗Connected to {network} blockchain!")
         return web3
     except HTTPError as e:
-        logger.error(f"HTTP error connecting to {network}: {e} (key redacted)")
+        logger.error(f"🌐❌HTTP error connecting to {network}: {e} (key redacted)")
         raise
     except Exception as e:
-        logger.error(f"Web3 connection error for {network}: {e} (key redacted)")
+        logger.error(f"🌐🔗⛓️Web3 connection error for {network}: {e} (key redacted)")
         raise HTTPException(status_code=503, detail=f"{network} connection unavailable")
 
 # Block caching
````
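`connect_web3` sits under a `@retry(` decorator whose arguments fall outside this hunk (the call style suggests tenacity, though the imports are not shown). The general mechanism, sketched without tenacity — the attempt count and delay here are illustrative, not the project's settings:

```python
import time
from functools import wraps

def retry(attempts: int = 3, delay: float = 0.01):
    """Minimal stand-in for a tenacity-style decorator: re-invoke the
    function on any exception, re-raising after the final attempt."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise
                    time.sleep(delay)  # real code would back off exponentially
        return wrapper
    return decorator

calls = []

@retry(attempts=3)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient")
    return "connected"

print(flaky(), len(calls))  # connected 3
```

The decorator matters here because `connect_web3` deliberately re-raises on HTTP errors: raising is what signals the wrapper to try again.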
````diff
@@ -171,11 +174,11 @@ def get_latest_block_data(web3):
     current_time = time.time()
     latest_block = web3.eth.block_number
     if latest_block in block_cache and (current_time - block_cache[latest_block]["timestamp"]) < CACHE_TTL:
-        logger.info(f"Using cached block {latest_block}")
+        logger.info(f"🧹🔗Using cached block {latest_block}")
         return block_cache[latest_block]["data"]
     block_data = web3.eth.get_block(latest_block, full_transactions=True)
     block_cache[latest_block] = {"data": block_data, "timestamp": current_time}
-    logger.info(f"Fetched new block {latest_block}")
+    logger.info(f"🐕🦴Fetched new block {latest_block}")
     return block_data
 
 # Historical data
````
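`get_latest_block_data` is a small TTL cache keyed by block number. The same mechanism in miniature, with a stand-in fetcher and a deliberately tiny TTL (the real `CACHE_TTL` is defined elsewhere in `ai_agent.py`):

```python
import time

CACHE_TTL = 0.05          # illustrative; not the project's real value
block_cache = {}
fetch_count = 0

def get_block(number: int):
    """TTL cache shaped like the diff: reuse an entry until it expires."""
    global fetch_count
    now = time.time()
    entry = block_cache.get(number)
    if entry and (now - entry["timestamp"]) < CACHE_TTL:
        return entry["data"]                        # cache hit
    fetch_count += 1                                # stand-in for web3.eth.get_block
    block_cache[number] = {"data": {"number": number}, "timestamp": now}
    return block_cache[number]["data"]

get_block(100); get_block(100)        # second call is served from cache
print(fetch_count)                    # 1
time.sleep(0.06); get_block(100)      # TTL expired → re-fetch
print(fetch_count)                    # 2
```

The cache keeps the poll loop from hammering Infura when the chain has not produced a new block between iterations.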
````diff
@@ -185,10 +188,10 @@ async def fetch_historical_blocks(web3, start_block, num_blocks):
         try:
             block = web3.eth.get_block(block_num, full_transactions=True)
             historical_data.append(block)
-            logger.info(f"Fetched historical block {block_num}")
+            logger.info(f"📜🏛️🏺Fetched historical block {block_num}")
             await asyncio.sleep(1)  # Avoid rate limits
         except HTTPError as e:
-            logger.error(f"Infura rate limit hit: {e}")
+            logger.error(f"🛑✋Infura rate limit hit: {e}")
             break
     return historical_data
````
````diff
@@ -208,12 +211,12 @@ async def analyze_transaction(tx: Transaction):
         tx_data = {"from": tx.from_address, "to": tx.to_address, "value": int(tx.value), "gas": tx.gas}
         ai_model = AIModel.get_instance()
         result = ai_model.analyze(tx_data, web3)
-        logger.info(f"Transaction analyzed: {tx.from_address} -> {tx.to_address} | {result}")
+        logger.info(f"🧐Transaction analyzed: {tx.from_address} -> {tx.to_address} | {result}")
         return {"result": result}
     except HTTPException as e:
         raise e
     except Exception as e:
-        logger.error(f"Analyze failed: {e}")
+        logger.error(f"❌📉Analyze failed: {e}")
         raise HTTPException(status_code=500, detail="Internal server error during analysis")
 
 @app.on_event("startup")
````
````diff
@@ -222,30 +225,32 @@ async def startup_event():
         ensure_hf_xet()  # Ensure hf_xet is installed
         web3 = connect_web3()
         ai_model = AIModel.get_instance()
-        logger.info("Starting blockchain polling and historical fetch in background")
+        logger.info("1️⃣🚀Initiating ai-agent service")
+        asyncio.create_task(ai_model.load_model())  # Load model in background
+        logger.info("֎🇦🇮 ai-agent service ready")
+        logger.info("🏁Starting blockchain polling and historical fetch in background")
         asyncio.create_task(poll_blockchain(web3))
         asyncio.create_task(fetch_historical_blocks(web3, web3.eth.block_number - 1000, 1000))
-        asyncio.create_task(ai_model.load_model())  # Load model in background
-        logger.info("Startup tasks scheduled")
+        logger.info("🕘🗓️Startup tasks scheduled")
     except HTTPException as e:
-        logger.error(f"Startup failed with HTTP exception: {e.detail}")
+        logger.error(f"🔴Startup failed with HTTP exception: {e.detail}")
     except Exception as e:
-        logger.error(f"Startup failed: {e}")
+        logger.error(f"🔴Startup failed: {e}")
 
 async def poll_blockchain(web3):
     ai_model = AIModel.get_instance()
     while not ai_model.model_loaded:
-        logger.info("Waiting for AI model to load before polling...")
+        logger.info("...⏳Waiting for AI model to load before polling...")
         await asyncio.sleep(5)
     while True:
         try:
             block_data = get_latest_block_data(web3)
             for tx in block_data["transactions"]:
                 result = ai_model.analyze(tx, web3)
                 if "High" in result or "Medium" in result:
-                    logger.warning(f"Anomaly detected in block {block_data['number']}: {result}")
+                    logger.warning(f"/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿̿ ̿̿💥Anomaly detected in block {block_data['number']}: {result}")
         except Exception as e:
-            logger.error(f"Polling error: {e}")
+            logger.error(f"🔴🗳️Polling error: {e}")
         await asyncio.sleep(10)
 
 if __name__ == "__main__":
````
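The reordering above schedules `ai_model.load_model()` before the polling tasks. `asyncio.create_task` only queues a coroutine; it starts running at the next `await`, which is why `poll_blockchain` still has to spin on `model_loaded`. A toy reproduction of that scheduling order (all names invented):

```python
import asyncio

order = []

async def load_model():
    order.append("load started")
    await asyncio.sleep(0)        # stand-in for the slow model download
    order.append("model loaded")

async def startup():
    asyncio.create_task(load_model())   # queued, not yet running
    order.append("startup returned")    # executes before load_model's first line
    await asyncio.sleep(0.01)           # yield so the background task can finish

asyncio.run(startup())
print(order)  # ['startup returned', 'load started', 'model loaded']
```

This also shows why the "service ready" log line fires before the model is actually loaded: scheduling the task and completing it are separate events on the loop.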

docker-compose.yml

Lines changed: 24 additions & 7 deletions

````diff
@@ -1,11 +1,11 @@
 services:
   vault:
-    image: hashicorp/vault:1.19.0
+    image: hashicorp/vault:1.20.0
     container_name: vault
     ports:
       - "8200:8200"
     environment:
-      - VAULT_DEV_ROOT_TOKEN_ID=myroot
+      - VAULT_DEV_ROOT_TOKEN_ID=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
       - VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200
     command: server -dev
     cap_add:
@@ -15,6 +15,8 @@ services:
       interval: 5s
       timeout: 2s
       retries: 10
+    networks:
+      - blockchain-net
 
   blockchain-monitor:
     build:
@@ -26,11 +28,14 @@ services:
     environment:
       - PORT=8081
       - VAULT_ADDR=http://vault:8200
-      - VAULT_TOKEN=myroot
+      - VAULT_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
     depends_on:
       vault:
         condition: service_healthy
     command: "sh -c 'sleep 15 && ./blockchain-monitor'"
+    networks:
+      - blockchain-net
+
   anomaly-detector:
     build:
       context: ./go-services/anomaly-detector
@@ -41,9 +46,11 @@ services:
     environment:
       - PORT=8082
       - VAULT_ADDR=http://vault:8200
-      - VAULT_TOKEN=myroot
+      - VAULT_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
     depends_on:
       - vault
+    networks:
+      - blockchain-net
 
   dashboard:
     build:
@@ -55,9 +62,11 @@ services:
     environment:
       - PORT=8083
       - VAULT_ADDR=http://vault:8200
-      - VAULT_TOKEN=myroot
+      - VAULT_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
     depends_on:
       - vault
+    networks:
+      - blockchain-net
 
   ai-agent:
     build:
@@ -69,8 +78,16 @@ services:
     environment:
       - PORT=8000
       - VAULT_ADDR=http://vault:8200
-      - VAULT_AUTH_TOKEN=myroot
+      - VAULT_AUTH_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
     depends_on:
       vault:
         condition: service_healthy
-    command: "sh -c 'sleep 15 && gunicorn -k uvicorn.workers.UvicornWorker ai_agent:app'" 
+    command: "sh -c 'sleep 15 && gunicorn -k uvicorn.workers.UvicornWorker ai_agent:app'"
+    volumes:
+      - ./model_cache:/home/appuser/app/model_cache
+    networks:
+      - blockchain-net
+
+networks:
+  blockchain-net:
+    driver: bridge
````
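Every hard-coded `myroot` token becomes `${VAULT_TOKEN}`, which Compose interpolates from the invoking shell's environment (or an `.env` file) when the file is parsed; an unset variable expands to an empty string and prints a warning. A rough Python model of that plain `${VAR}` substitution (it covers only the simple form used above, not Compose's `${VAR:-default}` variants):

```python
import re

def substitute(compose_line: str, env: dict) -> str:
    """Rough model of Compose's ${VAR} interpolation: unset variables
    expand to "" (Compose additionally prints a warning in that case)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), compose_line)

line = "- VAULT_TOKEN=${VAULT_TOKEN}"
print(substitute(line, {"VAULT_TOKEN": "s.abc123"}))  # - VAULT_TOKEN=s.abc123
print(substitute(line, {}))                           # - VAULT_TOKEN=
```

Practically, `export VAULT_TOKEN=<dev token>` has to happen before `docker compose up`, as the inline comments in the diff note; otherwise every service receives an empty token and Vault authentication fails.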
