Merged · Changes from 12 commits
README.md (16 changes: 14 additions & 2 deletions)
@@ -8,7 +8,7 @@

[![PR Title Check](https://github.com/meleksabit/blockchain-ai-security-platform-terraform-aws/actions/workflows/pr-title-linter.yml/badge.svg)](https://github.com/meleksabit/blockchain-ai-security-platform-terraform-aws/actions/workflows/pr-title-linter.yml) [![GitHub Release](https://img.shields.io/github/v/release/meleksabit/blockchain-ai-security-platform-terraform-aws)](https://github.com/meleksabit/blockchain-ai-security-platform-terraform-aws/releases)

### An ֎🇦🇮-powered security platform for detecting anomalies in blockchain transactions, built with Terraform <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/terraform.png" alt="Terraform" title="Terraform"/> for AWS <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/aws.png" alt="AWS" title="AWS"/> infrastructure, Helm <img height="32" width="32" src="https://cdn.simpleicons.org/helm" /> for Kubernetes <img height="32" width="32" src="https://cdn.simpleicons.org/kubernetes" /> deployments, and a CI/CD <img height="32" width="32" src="https://cdn.simpleicons.org/jenkins" /> pipeline. The platform integrates AI agents <img height="32" width="32" src="https://cdn.simpleicons.org/openai" />, Go <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/go.png" alt="Go" title="Go"/> microservices, RDS <img height="32" width="32" src="https://cdn.simpleicons.org/amazonrds" />, and containerized deployments for a robust DevSecOps solution.
### An ֎🇦🇮-powered security platform for detecting anomalies in blockchain transactions, built with Terraform <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/terraform.png" alt="Terraform" title="Terraform"/> for AWS <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/aws.png" alt="AWS" title="AWS"/> infrastructure, Helm <img height="32" width="32" src="https://cdn.simpleicons.org/helm" /> for Kubernetes <img height="32" width="32" src="https://cdn.simpleicons.org/kubernetes" /> deployments, and a CI/CD <img height="32" width="32" src="https://cdn.simpleicons.org/jenkins" /> pipeline. The platform integrates AI agents <img height="32" width="32" src="https://cdn.simpleicons.org/openai" />, Go <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/go.png" alt="Go" title="Go"/> microservices, RDS, and containerized deployments for a robust DevSecOps solution.

## Table of Contents
- [Implementation Overview](#implementation-overview)
@@ -36,6 +36,9 @@
- **AWS**: Deployed via Terraform Cloud.
- **Components**:
- <img width="33" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/python.png" alt="Python" title="Python"/> **AI Agent**: Core anomaly detection service (port 8000).
<p align="center">
<img src="Screenshot 2025-07-15 183315.png" width="733"/>
</p>
- <img height="32" width="32" src="https://cdn.simpleicons.org/go" /> **Go Microservices**:
- `blockchain-monitor`: Tracks transactions (port 8081).
- `anomaly-detector`: Analyzes anomalies (port 8082).
@@ -81,7 +84,7 @@

## 📝✅Prerequisites

1. <img height="32" width="32" src="https://cdn.simpleicons.org/amazonwebservices" /> **AWS Account**:
1. <img width="50" src="https://raw.githubusercontent.com/marwin1991/profile-technology-icons/refs/heads/main/icons/aws.png" alt="AWS" title="AWS"/> **AWS Account**:
- Active account with IAM user access keys (EKS, EC2, ELB, ECR, IAM, S3, RDS permissions).
- Region: `eu-central-1`.

@@ -419,6 +422,11 @@ Obtain an Infura API key by creating an account at <a href="https://infura.io">infura.io</a>
curl http://<blockchain-monitor-load-balancer>:8081/health
curl http://<ai-agent-load-balancer>:8000/health
```

<p align="center">
<img src="Screenshot 2025-07-12 215045.png" width="733"/>
</p>

- Ensure the `network` field matches the configured value.
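
For reference, a healthy response from the updated `/health` handler has this rough shape (`sepolia` here is only an example value; it should match your configured network):

```sh
curl http://<ai-agent-load-balancer>:8000/health
# Expected shape once the model has finished loading:
# {"status": "🌾💚healthy", "web3_connected": true, "model_loaded": true, "network": "sepolia"}
```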

4. **IAM Role (`TerraformCloudRole`)**:
@@ -583,6 +591,10 @@ Obtain an Infura API key by creating an account at <a href="https://infura.io">infura.io</a>
```
- Use the LoadBalancer URL (port 8083); a lookup sketch follows the screenshot below.

<p align="center">
<img src="Screenshot 2025-07-12 192906.png" width="733"/>
</p>
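
If the dashboard hostname isn't at hand, one way to look it up is through the Service's LoadBalancer status. This is only a sketch; the Service name `dashboard` and the default namespace are assumptions:

```sh
kubectl get svc dashboard -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Then open http://<hostname>:8083 in a browser
```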

## 🏗️🧱📐Infrastructure Details
Infrastructure is managed in the `terraform/` folder:
- **Modules**: `eks`, `alb`, `s3`, `iam`, `network`, `vault`, `rds`.
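
Since state lives in Terraform Cloud, local usage is mostly authenticating and planning. A minimal sketch, assuming the workspace is already wired up in `terraform/` (plans may execute remotely depending on the workspace's execution mode):

```sh
terraform login      # authenticate against Terraform Cloud
cd terraform
terraform init       # connects to the Terraform Cloud backend
terraform plan
```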
Binary file added Screenshot 2025-07-12 192906.png
Binary file added Screenshot 2025-07-12 215045.png
Binary file added Screenshot 2025-07-15 183315.png
ai-agent/Dockerfile (6 changes: 4 additions & 2 deletions)
@@ -12,9 +12,11 @@ COPY --from=builder /usr/local/lib/python3.12/site-packages/ /usr/local/lib/pyth
COPY --from=builder /usr/local/bin/ /usr/local/bin/
COPY ai_agent.py .
RUN useradd -m appuser && \
mkdir -p /home/appuser/app/model_cache && \
chown -R appuser:appuser /home/appuser/app/model_cache && \
mkdir -p /home/appuser/.cache && \
chown -R appuser:appuser /home/appuser
chown -R appuser:appuser /home/appuser/.cache
USER appuser
HEALTHCHECK --interval=30s --timeout=3s \
CMD curl -f http://localhost:8000/health || exit 1
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-w", "4", "-b", "0.0.0.0:8000", "ai_agent:app"]
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-w", "1", "-b", "0.0.0.0:8000", "ai_agent:app"]
ai-agent/ai_agent.py (99 changes: 52 additions & 47 deletions)
@@ -18,7 +18,7 @@
class RedactingFormatter(logging.Formatter):
def format(self, record):
msg = super().format(record)
infura_key = os.environ.get("INFURA_API_KEY", "unknown") # Check every time
infura_key = os.environ.get("INFURA_API_KEY", "unknown")
return re.sub(rf"https://(sepolia|ropsten)\.infura\.io/v3/{infura_key}", "https://[network].infura.io/v3/[REDACTED]", msg)

# Configure Logging
@@ -38,32 +38,37 @@
class AIModel:
instance = None
model_loaded = False
tokenizer = None
model = None

@classmethod
def get_instance(cls):
if cls.instance is None:
cls.instance = cls()
return cls.instance

def __init__(self):
pass

async def load_model(self):
cache_dir = "./model_cache"
try:
logger.info("Loading AI Model in background...")
self.tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir)
self.model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir)
logger.info("🏁Starting AI model loading at %s", time.ctime())
start_time = time.time()
# Use the cache offline only if the model is already present; transformers stores
# hub downloads under models--<org>--<name>, not as a top-level config.json
cache_path = os.path.join(cache_dir, "models--cardiffnlp--twitter-roberta-base-sentiment")
local_only = os.path.isdir(cache_path)
self.tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir, local_files_only=local_only)
self.model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment", cache_dir=cache_dir, local_files_only=local_only)
load_duration = time.time() - start_time
self.model_loaded = True
logger.info("AI Model Loaded Successfully!")
logger.info("⏳⌛AI model loaded in %.2f seconds", load_duration)
logger.info("🚀AI Model Loaded Successfully!✅")
except Exception as e:
logger.error(f"Error loading AI model: {e}")
logger.error(f"🚧 ⚠️Error loading AI model: {e}⚠️ 🚧")
self.model_loaded = False
raise

def analyze(self, tx_data, web3):
if not self.model_loaded:
raise RuntimeError("AI Model not loaded yet")
raise RuntimeError("AI Model not loaded yet")
text = f"TX: {tx_data['from']} -> {tx_data['to']}, Amount: {web3.from_wei(tx_data['value'], 'ether')} ETH, Gas: {tx_data['gas']}"
inputs = self.tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
with torch.no_grad():
@@ -73,37 +78,35 @@
if tx_data["value"] > historical_avg_value * 5:
anomaly_score += 0.2
if anomaly_score > 0.7:
return f"High Anomaly Score: {anomaly_score:.2f} -> Potential Risk!"
return f"/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿̿ ̿̿💥High Anomaly Score: {anomaly_score:.2f} -> Potential Risk!☣️☢️"
elif anomaly_score > 0.5:
return f"Medium Anomaly Score: {anomaly_score:.2f} -> Needs Review"
return f" 🕵️ Medium Anomaly Score: {anomaly_score:.2f} -> Needs Review👀"
else:
return f"Normal Transaction (Score: {anomaly_score:.2f})"
return f"👌Normal Transaction (Score: {anomaly_score:.2f})"

# Ensure hf_xet is installed
def ensure_hf_xet():
try:
import hf_xet
logger.info("hf_xet package is already installed")
logger.info("📦hf_xet package is already installed")
except ImportError:
logger.warning("hf_xet not installed in image, expected pre-installation. Falling back to HTTP download.")
logger.warning("🚨hf_xet not installed in image, expected pre-installation. Falling back to HTTP download.⚠️")

# Health endpoint
@app.get("/health")
async def health_check():
try:
web3 = connect_web3()
ai_model = AIModel.get_instance()
return {
"status": "healthy" if ai_model.model_loaded else "starting",
"web3_connected": web3.is_connected(),
"model_loaded": ai_model.model_loaded,
"network": NETWORK
}
if ai_model.model_loaded:
return {"status": "🌾💚healthy", "web3_connected": web3.is_connected(), "model_loaded": True, "network": NETWORK}
else:
return {"status": "⏳⌛loading", "web3_connected": web3.is_connected(), "model_loaded": False, "network": NETWORK}, 200
except HTTPException as e:
raise e
except Exception as e:
logger.error(f"Health check failed: {e}")
return {"status": "unhealthy", "error": "Health check failed due to an internal error"}
logger.error(f"⚠️👎Health check failed: {e}")
return {"ـــــــــــــــﮩ٨ـ❤️️status": "☣️☠️unhealthy", "⚠️error⚠️": str(e)}, 503

# Vault client setup
@lru_cache(maxsize=1)
@@ -113,7 +116,7 @@
client.token = os.environ.get("VAULT_AUTH_TOKEN")
if not client.is_authenticated():
raise Exception("Vault authentication failed")
logger.info("Vault client authenticated successfully")
logger.info("🔐Vault client authenticated successfully")
return client

# Secrets retrieval
@@ -124,13 +127,13 @@
secret = client.secrets.kv.v2.read_secret_version(path="infura", mount_point="secret")
api_key = secret["data"]["data"]["api_key"]
if api_key.startswith("https://"):
logger.warning("Infura key from Vault appears to be a full URL - extracting key")
logger.warning("⚠︎ ⚡︎Infura key from Vault appears to be a full URL - extracting key")
api_key = api_key.split("/")[-1]
os.environ["INFURA_API_KEY"] = api_key
logger.info("Infura key retrieved from Vault")
logger.info("🔑Infura key retrieved from Vault")
return api_key
except Exception as e:
logger.error(f"Vault Infura Error: {e}")
logger.error(f"Vault Infura Error: {e}")
raise

@retry(
@@ -145,22 +148,22 @@
infura_key = get_infura_key()
url = f"https://{network}.infura.io/v3/{infura_key}"
else:
logger.error(f"Unsupported network: {network}")
raise ValueError(f"Unsupported network: {network}")
logger.error(f"💀💻Unsupported network: {network}")
raise ValueError(f"💀💻Unsupported network: {network}")

try:
web3 = Web3(Web3.HTTPProvider(url))
connected = web3.is_connected()
if not connected:
logger.error(f"Web3 connection failed - {network} not reachable (key redacted)")
logger.error(f"🔗💔Web3 connection failed - {network} not reachable (key redacted)")
raise HTTPException(status_code=503, detail=f"Failed to connect to {network}")
logger.info(f"Connected to {network} blockchain!")
logger.info(f"🔗Connected to {network} blockchain!")
return web3
except HTTPError as e:
logger.error(f"HTTP error connecting to {network}: {e} (key redacted)")
logger.error(f"🌐❌HTTP error connecting to {network}: {e} (key redacted)")
raise
except Exception as e:
logger.error(f"Web3 connection error for {network}: {e} (key redacted)")
logger.error(f"🌐🔗⛓️Web3 connection error for {network}: {e} (key redacted)")
raise HTTPException(status_code=503, detail=f"{network} connection unavailable")

# Block caching
@@ -171,11 +174,11 @@
current_time = time.time()
latest_block = web3.eth.block_number
if latest_block in block_cache and (current_time - block_cache[latest_block]["timestamp"]) < CACHE_TTL:
logger.info(f"Using cached block {latest_block}")
logger.info(f"🧹🔗Using cached block {latest_block}")
return block_cache[latest_block]["data"]
block_data = web3.eth.get_block(latest_block, full_transactions=True)
block_cache[latest_block] = {"data": block_data, "timestamp": current_time}
logger.info(f"Fetched new block {latest_block}")
logger.info(f"🐕🦴Fetched new block {latest_block}")
return block_data

# Historical data
@@ -185,10 +188,10 @@
try:
block = web3.eth.get_block(block_num, full_transactions=True)
historical_data.append(block)
logger.info(f"Fetched historical block {block_num}")
logger.info(f"📜🏛️🏺Fetched historical block {block_num}")
await asyncio.sleep(1) # Avoid rate limits
except HTTPError as e:
logger.error(f"Infura rate limit hit: {e}")
logger.error(f"🛑✋Infura rate limit hit: {e}")
break
return historical_data

@@ -208,12 +211,12 @@
tx_data = {"from": tx.from_address, "to": tx.to_address, "value": int(tx.value), "gas": tx.gas}
ai_model = AIModel.get_instance()
result = ai_model.analyze(tx_data, web3)
logger.info(f"Transaction analyzed: {tx.from_address} -> {tx.to_address} | {result}")
logger.info(f"🧐Transaction analyzed: {tx.from_address} -> {tx.to_address} | {result}")
return {"result": result}
except HTTPException as e:
raise e
except Exception as e:
logger.error(f"Analyze failed: {e}")
logger.error(f"❌📉Analyze failed: {e}")
raise HTTPException(status_code=500, detail="Internal server error during analysis")

@app.on_event("startup")
@@ -222,30 +225,32 @@
ensure_hf_xet() # Ensure hf_xet is installed
web3 = connect_web3()
ai_model = AIModel.get_instance()
logger.info("Starting blockchain polling and historical fetch in background")
logger.info("1️⃣🚀Initiating ai-agent service")
asyncio.create_task(ai_model.load_model()) # Load model in background
logger.info("֎🇦🇮 ai-agent service ready")
logger.info("🏁Starting blockchain polling and historical fetch in background")
asyncio.create_task(poll_blockchain(web3))
asyncio.create_task(fetch_historical_blocks(web3, web3.eth.block_number - 1000, 1000))
asyncio.create_task(ai_model.load_model()) # Load model in background
logger.info("Startup tasks scheduled")
logger.info("🕘🗓️Startup tasks scheduled")
except HTTPException as e:
logger.error(f"Startup failed with HTTP exception: {e.detail}")
logger.error(f"🔴Startup failed with HTTP exception: {e.detail}")
except Exception as e:
logger.error(f"Startup failed: {e}")
logger.error(f"🔴Startup failed: {e}")

async def poll_blockchain(web3):
ai_model = AIModel.get_instance()
while not ai_model.model_loaded:
logger.info("Waiting for AI model to load before polling...")
logger.info("...⏳Waiting for AI model to load before polling...")
await asyncio.sleep(5)
while True:
try:
block_data = get_latest_block_data(web3)
for tx in block_data["transactions"]:
result = ai_model.analyze(tx, web3)
if "High" in result or "Medium" in result:
logger.warning(f"Anomaly detected in block {block_data['number']}: {result}")
logger.warning(f"/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿̿ ̿̿💥Anomaly detected in block {block_data['number']}: {result}")
except Exception as e:
logger.error(f"Polling error: {e}")
logger.error(f"🔴🗳️Polling error: {e}")
await asyncio.sleep(10)

if __name__ == "__main__":
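
The route decorator for the analyze handler falls outside this hunk, so the following request is only a sketch: the `/analyze` path is an assumption, and the payload fields simply mirror the names the handler reads (`from_address`, `to_address`, `value`, `gas`):

```sh
# Hypothetical endpoint path and payload; adjust to the actual route and request model
curl -X POST http://<ai-agent-load-balancer>:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"from_address": "0x...", "to_address": "0x...", "value": 1000000000000000000, "gas": 21000}'
```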
docker-compose.yml (31 changes: 24 additions & 7 deletions)
@@ -1,11 +1,11 @@
services:
vault:
image: hashicorp/vault:1.19.0
image: hashicorp/vault:1.20.0
container_name: vault
ports:
- "8200:8200"
environment:
- VAULT_DEV_ROOT_TOKEN_ID=myroot
- VAULT_DEV_ROOT_TOKEN_ID=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
- VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200
command: server -dev
cap_add:
@@ -15,6 +15,8 @@ services:
interval: 5s
timeout: 2s
retries: 10
networks:
- blockchain-net

blockchain-monitor:
build:
@@ -26,11 +28,14 @@
environment:
- PORT=8081
- VAULT_ADDR=http://vault:8200
- VAULT_TOKEN=myroot
- VAULT_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
depends_on:
vault:
condition: service_healthy
command: "sh -c 'sleep 15 && ./blockchain-monitor'"
networks:
- blockchain-net

anomaly-detector:
build:
context: ./go-services/anomaly-detector
@@ -41,9 +46,11 @@
environment:
- PORT=8082
- VAULT_ADDR=http://vault:8200
- VAULT_TOKEN=myroot
- VAULT_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
depends_on:
- vault
networks:
- blockchain-net

dashboard:
build:
@@ -55,9 +62,11 @@
environment:
- PORT=8083
- VAULT_ADDR=http://vault:8200
- VAULT_TOKEN=myroot
- VAULT_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
depends_on:
- vault
networks:
- blockchain-net

ai-agent:
build:
@@ -69,8 +78,16 @@
environment:
- PORT=8000
- VAULT_ADDR=http://vault:8200
- VAULT_AUTH_TOKEN=myroot
- VAULT_AUTH_TOKEN=${VAULT_TOKEN} # Uses exported VAULT_TOKEN
depends_on:
vault:
condition: service_healthy
command: "sh -c 'sleep 15 && gunicorn -k uvicorn.workers.UvicornWorker ai_agent:app'"
command: "sh -c 'sleep 15 && gunicorn -k uvicorn.workers.UvicornWorker ai_agent:app'"
volumes:
- ./model_cache:/home/appuser/app/model_cache
networks:
- blockchain-net

networks:
blockchain-net:
driver: bridge
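
With the hardcoded `myroot` token gone, `VAULT_TOKEN` must be exported before bringing the stack up. A minimal dev-mode run, where `myroot` is only a throwaway example value:

```sh
export VAULT_TOKEN=myroot   # dev-only example; use a real token outside local development
docker compose up -d --build
curl http://localhost:8000/health   # may report loading at first; the model loads in the background
```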