Basic setup for auth-at22-aks #1280
base: main
Conversation
📝 Walkthrough

This pull request introduces a new GitHub Actions workflow to automate Terraform plan and apply procedures on the main branch, triggered by pushes, pull requests, and manual dispatch events. In tandem, several Terraform configuration files have been added or updated to provision an AKS cluster and its associated resources in Azure. These changes cover cluster deployment, network setup, monitoring with Azure Monitor, and role assignments for ACR pull access, as well as the necessary provider and variable definitions to support the infrastructure.
Sequence Diagram(s)

sequenceDiagram
participant U as User/Event
participant GH as GitHub Actions
participant Plan as Terraform Plan Job
participant Deploy as Terraform Apply Job
U->>GH: Push/PR event or Manual Dispatch
GH->>Plan: Trigger Terraform plan with workflow inputs
Plan-->>GH: Return plan results
alt Plan Successful
GH->>Deploy: Trigger Terraform apply job
Deploy-->>GH: Return apply results
else Plan Fails
GH-->>U: Notify failure
end
Suggested reviewers
📜 Recent review details

Configuration used: .coderabbit.yaml

📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
⏰ Context from checks skipped due to timeout of 90000ms (1)
Terraform environment test

Format and Style 🖌

| Context | Values |
|---|---|
| Pusher | @bengtfredh |
| Action | pull_request |
| Working Directory | ./infrastructure/altinn-auth-test/auth-at22-aks-rg |
| State File | github.com/altinn/altinn-platform/environments/test/auth-at22-aks-rg.tfstate |
| Plan File | github.com_altinn_altinn-platform_environments_test_auth-at22-aks-rg.tfstate.tfplan |
Actionable comments posted: 5
🧹 Nitpick comments (8)
infrastructure/altinn-auth-test/auth-at22-aks-rg/variables.tf (1)

1-4: Enhance variable description for better clarity.

While the variable definition is correct, consider enhancing the description to be more specific about its purpose and format.

variable "subscription_id" {
  type        = string
- description = "Subscription id to deploy services"
+ description = "Azure subscription ID (UUID) where resources will be deployed"
}

infrastructure/modules/aks/flux.tf (1)
1-5: Consider adding Flux configuration settings.

The current Flux extension setup is minimal. Consider adding:
- Specific version configuration
- GitOps repository configuration
- Flux configuration settings for better control over the deployment process
Example configuration block to consider adding:
configuration_settings = {
  "helm-controller.enabled"         = true
  "source-controller.enabled"       = true
  "kustomize-controller.enabled"    = true
  "notification-controller.enabled" = true
  "flux.version"                    = "2.0.0"
}
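For the GitOps repository side mentioned above, a Flux configuration can also be declared in Terraform. Below is a minimal sketch using azurerm_kubernetes_flux_configuration; the repository URL, branch, path, and the Flux extension resource name are illustrative assumptions, not values from this PR:

```hcl
# Sketch only: URL, branch, path, and the extension reference are placeholders.
resource "azurerm_kubernetes_flux_configuration" "platform" {
  name       = "platform-config"
  cluster_id = azurerm_kubernetes_cluster.aks.id
  namespace  = "flux-system"

  git_repository {
    url             = "https://github.com/example-org/gitops-repo" # placeholder
    reference_type  = "branch"
    reference_value = "main"
  }

  kustomizations {
    name = "infra"
    path = "./clusters/at22" # placeholder path
  }

  # Assumes the Flux extension in flux.tf is named "flux"; adjust to the actual resource name.
  depends_on = [azurerm_kubernetes_cluster_extension.flux]
}
```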
infrastructure/altinn-auth-test/auth-at22-aks-rg/providers.tf (1)

17-22: Consider additional essential resource providers.

The current list of resource providers is good, but consider adding these commonly needed providers for AKS:
resource_providers_to_register = [
  "Microsoft.Monitor",
  "Microsoft.AlertsManagement",
  "Microsoft.Dashboard",
- "Microsoft.KubernetesConfiguration"
+ "Microsoft.KubernetesConfiguration",
+ "Microsoft.ContainerService",
+ "Microsoft.OperationsManagement",
+ "Microsoft.OperationalInsights"
]

infrastructure/altinn-auth-test/auth-at22-aks-rg/aks.tf (1)
19-24: Review system node pool configuration.

The system node pool configuration has potential issues:
- standard_b2s_v2 might not be suitable for system workloads
- Min count of 1 node doesn't provide high availability
Consider these changes:
syspool = {
- vm_size              = "standard_b2s_v2"
+ vm_size              = "Standard_D4s_v3" # More suitable for system workloads
  auto_scaling_enabled = "true"
  node_count           = "1"
- min_count            = "1"
+ min_count            = "2" # Ensure high availability
  max_count            = "6"

infrastructure/modules/aks/variables.tf (1)
17-20: Add validation for aks_sku_tier and kubernetes_version.

Consider adding validation rules to ensure:
- aks_sku_tier is either "Free" or "Standard"
- kubernetes_version follows a semantic versioning pattern

variable "aks_sku_tier" {
  type        = string
  description = "Kubernetes sku"
+ validation {
+   condition     = contains(["Free", "Standard"], var.aks_sku_tier)
+   error_message = "aks_sku_tier must be either Free or Standard."
+ }
}
variable "kubernetes_version" {
  type        = string
  description = "Kubernetes version"
+ validation {
+   condition     = can(regex("^\\d+\\.\\d+\\.\\d+$", var.kubernetes_version))
+   error_message = "kubernetes_version must follow semantic versioning (e.g., 1.25.0)."
+ }
}

Also applies to: 21-24
infrastructure/modules/aks/network.tf (1)
24-25: Potential DNS conflict with identical domain name labels.

Both IPv4 and IPv6 public IPs use the same domain name label. This might cause DNS resolution issues. Consider using distinct labels or a suffix for IPv6.

- domain_name_label = "${var.prefix}-${var.environment}"
+ domain_name_label = "${var.prefix}-${var.environment}-v4"
- domain_name_label = "${var.prefix}-${var.environment}"
+ domain_name_label = "${var.prefix}-${var.environment}-v6"

Also applies to: 35-36
.github/workflows/auth-at22-aks-rg.yml (2)
48-48: Add environment protection rules.

The test environment lacks protection rules. Consider adding (a sketch follows below this list):
- Required reviewers
- Wait timer
- Branch restrictions
Also applies to: 68-68
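One way to codify such protection rules, outside the scope of this PR, is the integrations/github Terraform provider. A minimal sketch, assuming that provider is acceptable and that the repository name and reviewer team ID below are placeholders:

```hcl
# Sketch only: repository name and team ID are illustrative assumptions.
resource "github_repository_environment" "test" {
  repository  = "altinn-platform"
  environment = "test"
  wait_timer  = 5 # minutes to wait before deployments may proceed

  reviewers {
    teams = [1234567] # placeholder ID of the team required to review
  }

  deployment_branch_policy {
    protected_branches     = true
    custom_branch_policies = false
  }
}
```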
64-64: Specify Terraform version constraint.

Using latest for the Terraform version could lead to unexpected behavior if a new version introduces breaking changes. Consider pinning to a specific version or version range.

- tf_version: latest
+ tf_version: "~> 1.5.0"

Also applies to: 85-85
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- .github/workflows/auth-at22-aks-rg.yml (1 hunks)
- infrastructure/altinn-auth-test/auth-at22-aks-rg/aks.tf (1 hunks)
- infrastructure/altinn-auth-test/auth-at22-aks-rg/providers.tf (1 hunks)
- infrastructure/altinn-auth-test/auth-at22-aks-rg/terraform.tfvars (1 hunks)
- infrastructure/altinn-auth-test/auth-at22-aks-rg/variables.tf (1 hunks)
- infrastructure/modules/aks/aks.tf (1 hunks)
- infrastructure/modules/aks/data.tf (1 hunks)
- infrastructure/modules/aks/flux.tf (1 hunks)
- infrastructure/modules/aks/network.tf (1 hunks)
- infrastructure/modules/aks/providers.tf (1 hunks)
- infrastructure/modules/aks/roles.tf (1 hunks)
- infrastructure/modules/aks/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- infrastructure/altinn-auth-test/auth-at22-aks-rg/terraform.tfvars
- infrastructure/modules/aks/data.tf
- infrastructure/modules/aks/providers.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/aks.tf
[HIGH] 6-86: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources
(CKV_AZURE_227)
[HIGH] 88-109: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources
(CKV_AZURE_227)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (5)
infrastructure/modules/aks/roles.tf (1)
1-9: Review security implications of skipping AAD check.

The configuration skips the Azure AD service principal check, which could potentially impact security. Please verify if this is intentionally required for your use case.

Also, consider adding an explicit create timeout for more predictable provisioning:

resource "azurerm_role_assignment" "aks_acrpull" {
  for_each                         = toset(var.aks_acrpull_scopes)
  principal_id                     = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
  role_definition_name             = "AcrPull"
  scope                            = each.value
  skip_service_principal_aad_check = true
+ timeouts {
+   create = "30m"
+ }
  depends_on = [azurerm_kubernetes_cluster.aks]
}

infrastructure/altinn-auth-test/auth-at22-aks-rg/providers.tf (1)
4-6: Update azurerm provider version.

The specified version ~> 4.16.0 is outdated. Consider updating to a more recent version for security patches and new features.

azurerm = {
  source  = "hashicorp/azurerm"
- version = "~> 4.16.0"
+ version = "~> 4.79.0" # Or the latest stable version
}

✅ Verification successful

Align provider version with project standard

The specific version constraint ~> 4.16.0 is unnecessarily restrictive. The rest of the codebase consistently uses ~> 4.0, which allows for minor version updates while maintaining major version compatibility. Recommend changing to match the project standard:

azurerm = {
  source  = "hashicorp/azurerm"
- version = "~> 4.16.0"
+ version = "~> 4.0"
}

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found

Script:

#!/bin/bash
# Find all Terraform files
echo "=== Terraform Files ==="
fd -e tf

echo -e "\n=== Provider Versions ==="
# Search for provider version specifications
rg "version.*=.*\"~>.*\"" -A 2 -B 2

echo -e "\n=== README Content ==="
# Check if there's any documentation about provider versions
cat README.md 2>/dev/null || echo "README.md not found"

echo -e "\n=== Provider Configurations ==="
# Look for provider configurations and features
rg "provider \"azurerm\"" -A 10

Length of output: 15105
infrastructure/modules/aks/network.tf (1)
19-20: Consider resource group placement for public IPs.

The public IPs are created in the node resource group while the IP prefixes are in the main resource group. This split might complicate resource management. Consider using the same resource group for all networking resources.
Also applies to: 29-30
infrastructure/modules/aks/aks.tf (2)
46-48: Verify overlay networking mode compatibility.

The network plugin mode is set to "overlay". This mode:
- Requires containerd runtime
- Has different performance characteristics
- May affect existing workloads
Please verify these requirements are met.
6-86: ⚠️ Potential issue: Enable disk encryption for AKS cluster.

The cluster configuration is missing disk encryption settings. This is flagged as a security concern by static analysis (a sketch of the missing resource follows the Checkov note below).

resource "azurerm_kubernetes_cluster" "aks" {
  # ... existing configuration ...
+ disk_encryption_set_id = azurerm_disk_encryption_set.aks.id
}
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 6-86: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources
(CKV_AZURE_227)
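The suggested disk_encryption_set_id refers to an azurerm_disk_encryption_set resource that does not yet exist in this module. A minimal sketch of what it could look like, assuming a Key Vault and key are provisioned alongside it (the Key Vault reference and names below are placeholders, not part of this PR):

```hcl
# Sketch only: the Key Vault reference and names are illustrative assumptions.
resource "azurerm_key_vault_key" "aks_des" {
  name         = "${var.prefix}-${var.environment}-aks-des-key"
  key_vault_id = azurerm_key_vault.aks.id # assumed to exist elsewhere in the module
  key_type     = "RSA"
  key_size     = 2048
  key_opts     = ["decrypt", "encrypt", "unwrapKey", "wrapKey"]
}

resource "azurerm_disk_encryption_set" "aks" {
  name                = "${var.prefix}-${var.environment}-aks-des"
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
  key_vault_key_id    = azurerm_key_vault_key.aks_des.id

  identity {
    type = "SystemAssigned"
  }
}
```

Note that the disk encryption set's managed identity also needs access to the key (for example a "Key Vault Crypto Service Encryption User" role assignment) before the cluster can reference it.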
module "aks" { | ||
source = "../../modules/aks" | ||
prefix = "auth" | ||
environment = "at22" | ||
subscription_id = var.subscription_id | ||
location = "norwayeast" | ||
aks_sku_tier = "Free" | ||
kubernetes_version = "1.30" | ||
vnet_address_space = [ | ||
"10.202.72.0/21", | ||
"fd0a:7204:c37f:900::/56" | ||
] | ||
subnet_address_prefixes = { | ||
aks_syspool = ["fdb8:8b5f:00f0:901::/64", "10.202.72.0/24"] | ||
aks_workpool = ["fdb8:8b5f:00f0:902::/64", "10.202.73.0/24"] | ||
} | ||
pool_configs = { | ||
syspool = { | ||
vm_size = "standard_b2s_v2" | ||
auto_scaling_enabled = "true" | ||
node_count = "1" | ||
min_count = "1" | ||
max_count = "6" | ||
} | ||
workpool = { | ||
vm_size = "standard_b2s_v2" | ||
auto_scaling_enabled = "true" | ||
node_count = "0" | ||
min_count = "0" | ||
max_count = "6" | ||
} | ||
} | ||
aks_acrpull_scopes = [ | ||
"/subscriptions/a6e9ee7d-2b65-41e1-adfb-0c8c23515cf9/resourceGroups/acr/providers/Microsoft.ContainerRegistry/registries/altinncr" | ||
] | ||
} |
🛠️ Refactor suggestion
Add essential AKS configurations.
The current configuration is missing several important AKS settings that should be considered for production use.
Consider adding these configurations:
network_policy = "azure"
network_plugin = "azure"
network_plugin_mode = "overlay"
service_cidr = "10.0.0.0/16"
dns_service_ip = "10.0.0.10"
maintenance_window = {
allowed = [
{
day = "Sunday"
hours = [21, 22, 23]
}
]
}
monitoring_metrics_enabled = true
azure_policy_enabled = true
key_vault_secrets_provider = {
enabled = true
rotation_poll_interval = "2m"
}
subscription_id    = var.subscription_id
location           = "norwayeast"
aks_sku_tier       = "Free"
kubernetes_version = "1.30"
Fix invalid Kubernetes version.
The specified version "1.30" is invalid. As of January 2025, this version doesn't exist.
- kubernetes_version = "1.30"
+ kubernetes_version = "1.28" # Or the latest stable version supported by Azure
type = map(object({
  vm_size              = string
  auto_scaling_enabled = string
  node_count           = string
  min_count            = string
  max_count            = string
}))
description = "variables for nodepools"
Fix data types in pool_configs variable.
The auto_scaling_enabled and node_count fields should use appropriate data types:
- auto_scaling_enabled should be boolean instead of string
- node_count, min_count, and max_count should be number instead of string
variable "pool_configs" {
type = map(object({
vm_size = string
- auto_scaling_enabled = string
- node_count = string
- min_count = string
- max_count = string
+ auto_scaling_enabled = bool
+ node_count = number
+ min_count = number
+ max_count = number
}))
description = "variables for nodepools"
}
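If the types are changed as suggested, the module call in infrastructure/altinn-auth-test/auth-at22-aks-rg/aks.tf would then pass native values instead of quoted strings; a sketch of how the syspool entry could look:

```hcl
# Sketch only: values mirror the existing syspool entry, just with native types.
syspool = {
  vm_size              = "standard_b2s_v2"
  auto_scaling_enabled = true # bool instead of "true"
  node_count           = 1    # number instead of "1"
  min_count            = 1
  max_count            = 6
}
```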
TF_STATE_NAME: auth-at22-aks-rg.tfstate
TF_PROJECT: ./infrastructure/altinn-auth-test/auth-at22-aks-rg
ARM_CLIENT_ID: ${{ vars.TF_AZURE_CLIENT_ID }}
ARM_SUBSCRIPTION_ID: 37bac63a-b964-46b2-8de8-ba93c432ea1f
Move subscription ID to GitHub variables.
The Azure subscription ID is hardcoded in the workflow file. This should be moved to GitHub variables for better security and maintainability.
- ARM_SUBSCRIPTION_ID: 37bac63a-b964-46b2-8de8-ba93c432ea1f
+ ARM_SUBSCRIPTION_ID: ${{ vars.TF_AZURE_SUBSCRIPTION_ID }}
# azure_active_directory_role_based_access_control {
#   azure_rbac_enabled = true
# }
Uncomment and configure RBAC.
The RBAC configuration is commented out. Azure RBAC should be enabled for better security.
- # azure_active_directory_role_based_access_control {
- # azure_rbac_enabled = true
- # }
+ azure_active_directory_role_based_access_control {
+ azure_rbac_enabled = true
+ managed = true
+ }
Actionable comments posted: 2
♻️ Duplicate comments (1)
infrastructure/modules/aks/aks.tf (1)
72-74: ⚠️ Potential issue: Uncomment and configure RBAC.

The RBAC configuration is commented out. Azure RBAC should be enabled for better security.

- # azure_active_directory_role_based_access_control {
- #   azure_rbac_enabled = true
- # }
+ azure_active_directory_role_based_access_control {
+   azure_rbac_enabled = true
+   managed            = true
+ }
🧹 Nitpick comments (1)
infrastructure/modules/aks/monitor.tf (1)
11-15: Consider adding tags for resource management.

Both monitoring resources would benefit from having tags for better resource management and cost allocation.
Apply this diff to add essential tags:
resource "azurerm_monitor_workspace" "aks" { name = "${var.prefix}-${var.environment}-aks-amw" resource_group_name = azurerm_resource_group.aks.name location = azurerm_resource_group.aks.location + tags = { + Environment = var.environment + Terraform = "true" + Service = "AKS Monitoring" + } }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- infrastructure/modules/aks/aks.tf (1 hunks)
- infrastructure/modules/aks/monitor.tf (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/aks.tf
[HIGH] 6-92: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources
(CKV_AZURE_227)
[HIGH] 94-115: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources
(CKV_AZURE_227)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
🔇 Additional comments (3)
infrastructure/modules/aks/monitor.tf (1)
1-9: Consider increasing the retention period for Log Analytics Workspace.

The current retention period of 30 days might be insufficient for compliance requirements and long-term analysis. Consider increasing it based on your compliance needs and data analysis requirements.
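A sketch of the relevant attribute; 90 days is an arbitrary illustration, not a value taken from this PR:

```hcl
resource "azurerm_log_analytics_workspace" "aks" {
  # ...existing arguments...
  retention_in_days = 90 # pick a value that matches compliance and analysis needs
}
```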
infrastructure/modules/aks/aks.tf (2)
25-43: Consider security implications of max_pods setting.

The max_pods setting of 200 per node is relatively high. This could potentially impact:
- Node stability under high load
- Network performance
- Security isolation between pods
Consider reducing this value based on your workload requirements and node size.
48-52: Verify IPv6 configuration compatibility.

The network configuration includes both IPv4 and IPv6. Ensure that:
- All dependent services support IPv6
- Network security groups are properly configured for IPv6
- Load balancer and ingress controllers support IPv6
✅ Verification successful
IPv6 configuration is properly implemented and supported ✅
The verification confirms that:
- Azure Load Balancer (standard SKU) and Traefik ingress controller are configured for IPv6
- Network resources include proper IPv6 public IPs and prefixes
- Dual-stack networking is correctly configured with appropriate CIDR ranges
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for IPv6-related configurations in other files
rg "IPv6|fd10:" -A 5

Length of output: 3939
resource "azurerm_kubernetes_cluster" "aks" { | ||
lifecycle { | ||
ignore_changes = [ | ||
workload_autoscaler_profile, | ||
default_node_pool[0].node_count, | ||
windows_profile, | ||
] | ||
} | ||
name = "${var.prefix}-${var.environment}-aks" | ||
location = azurerm_resource_group.aks.location | ||
resource_group_name = azurerm_resource_group.aks.name | ||
dns_prefix = "${var.prefix}-${var.environment}" | ||
sku_tier = var.aks_sku_tier | ||
kubernetes_version = var.kubernetes_version | ||
automatic_upgrade_channel = "patch" | ||
node_os_upgrade_channel = "NodeImage" | ||
oidc_issuer_enabled = true | ||
workload_identity_enabled = true | ||
|
Enable disk encryption for improved security.
Based on the static analysis, the AKS cluster should have disk encryption enabled for temp disks, caches, and data flows.
Add the following configuration to enable encryption:
workload_identity_enabled = true
+ disk_encryption_set_id = azurerm_disk_encryption_set.aks.id
+
+ security_profile {
+ encryption {
+ enable_encryption_at_host = true
+ }
+ }
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 6-92: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources
(CKV_AZURE_227)
resource "azurerm_kubernetes_cluster_node_pool" "workpool" { | ||
lifecycle { | ||
ignore_changes = [ | ||
node_count, | ||
] | ||
} | ||
name = "workpool" | ||
os_sku = "AzureLinux" | ||
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id | ||
vnet_subnet_id = azurerm_subnet.aks["aks_workpool"].id | ||
max_pods = 200 | ||
auto_scaling_enabled = var.pool_configs["workpool"].auto_scaling_enabled | ||
node_count = var.pool_configs["workpool"].node_count | ||
vm_size = var.pool_configs["workpool"].vm_size | ||
min_count = var.pool_configs["workpool"].min_count | ||
max_count = var.pool_configs["workpool"].max_count | ||
zones = ["1", "2", "3"] | ||
orchestrator_version = var.kubernetes_version | ||
upgrade_settings { | ||
max_surge = "10%" | ||
} | ||
} |
Apply consistent encryption settings to the workpool.
The workpool node pool should have the same encryption settings as the main cluster for consistent security.
Add disk encryption configuration to match the main cluster:
upgrade_settings {
max_surge = "10%"
}
+ enable_host_encryption = true
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 94-115: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources
(CKV_AZURE_227)
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- infrastructure/modules/aks/flux.tf (1 hunks)
- infrastructure/modules/aks/monitor.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- infrastructure/modules/aks/flux.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/monitor.tf
[HIGH] 21-39: Ensure that Storage blobs restrict public access
(CKV_AZURE_190)
[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key
(CKV2_AZURE_1)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (4)
infrastructure/modules/aks/monitor.tf (4)
1-9: Log Analytics Workspace Configuration is Solid

The configuration for the Azure Log Analytics Workspace is implemented correctly with a consistent naming convention, retention period, and a system-assigned identity.

11-15: Monitor Workspace Setup is Clear

The Azure Monitor Workspace is correctly defined with appropriate references to the resource group and location.

17-19: Random ID Generation is Correct

The random ID resource is straightforward and correctly set up for generating a unique suffix, aiding in storage account name uniqueness.

41-54: Diagnostic Setting Configuration Looks Appropriate

The diagnostic setting for the AKS cluster correctly targets the cluster and links to the storage account for log retention. Note that metrics collection is explicitly disabled, which appears intentional. Ensure that the referenced azurerm_kubernetes_cluster.aks resource exists in your deployment context.
resource "azurerm_storage_account" "aks" { | ||
name = "${var.prefix}${var.environment}akslog${random_id.aks.hex}" | ||
resource_group_name = azurerm_resource_group.aks.name | ||
location = azurerm_resource_group.aks.location | ||
account_tier = "Standard" | ||
account_replication_type = "ZRS" | ||
account_kind = "StorageV2" | ||
min_tls_version = "TLS1_2" | ||
is_hns_enabled = true | ||
|
||
network_rules { | ||
default_action = "Deny" | ||
bypass = ["AzureServices"] | ||
ip_rules = [ | ||
] | ||
virtual_network_subnet_ids = [ | ||
] | ||
} | ||
} |
🛠️ Refactor suggestion
Security: Restrict Public Access and Consider CMK Encryption
The Azure Storage Account is configured with standard parameters. However, please note that:
- Although network rules are set to "Deny" by default, it is best practice to explicitly restrict public access by adding the attribute allow_blob_public_access = false.
- Additionally, Checkov has flagged that for critical data, encryption using a Customer Managed Key (CMK) might be required. Please evaluate if this applies to your use case and add a CMK configuration if necessary (a sketch follows the diff below).
Suggested diff to add explicit public access restriction:
resource "azurerm_storage_account" "aks" {
name = "${var.prefix}${var.environment}akslog${random_id.aks.hex}"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
account_tier = "Standard"
account_replication_type = "ZRS"
account_kind = "StorageV2"
min_tls_version = "TLS1_2"
+ allow_blob_public_access = false
is_hns_enabled = true
network_rules {
default_action = "Deny"
bypass = ["AzureServices"]
ip_rules = [
]
virtual_network_subnet_ids = [
]
}
}
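If CMK encryption turns out to be required, one possible shape is the azurerm_storage_account_customer_managed_key resource. A minimal sketch, assuming a Key Vault and key already exist (both references below are placeholders) and that the storage account has a system-assigned identity with access to them:

```hcl
# Sketch only: the Key Vault and key references are illustrative assumptions.
resource "azurerm_storage_account_customer_managed_key" "aks_log" {
  storage_account_id = azurerm_storage_account.aks.id
  key_vault_id       = azurerm_key_vault.logs.id      # assumed to exist elsewhere
  key_name           = azurerm_key_vault_key.logs.name # assumed to exist elsewhere
}
```

The storage account itself would also need an identity block (for example identity { type = "SystemAssigned" }) and an access policy or role assignment on the Key Vault for that identity.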
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 21-39: Ensure that Storage blobs restrict public access
(CKV_AZURE_190)
[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key
(CKV2_AZURE_1)
Actionable comments posted: 3
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- infrastructure/modules/aks/monitor.tf (1 hunks)
- infrastructure/modules/aks/prometheus.tf (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/monitor.tf
[HIGH] 21-39: Ensure that Storage blobs restrict public access
(CKV_AZURE_190)
[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key
(CKV2_AZURE_1)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (5)
infrastructure/modules/aks/monitor.tf (5)
1-9
: Log Analytics Workspace Configuration Looks Good
The configuration for the Log Analytics Workspace is clear and follows a predictable naming scheme with a 30‐day retention period.
11-15
: Monitor Workspace Resource
The minimal configuration for the monitor workspace is acceptable given the current requirements.
17-19
: Random ID Resource – LGTM
This resource generates a short unique identifier as intended.
41-54
: Monitor Diagnostic Setting – Verification of Log and Metric Configuration
The diagnostic setting correctly targets the AKS cluster and routes logs to the storage account. Please verify that the selected log category ("kube-audit-admin") and the disabled metric collection are intentional choices. Additional log categories or metric collections may be required based on your operational needs.
21-39
: 🛠️ Refactor suggestionStorage Account – Enhance Security Settings
While the storage account configuration is functionally correct, two security enhancements are recommended:
- Restrict Public Blob Access: Explicitly add
allow_blob_public_access = false
to ensure that blobs are not inadvertently exposed.- Consider Encryption with a Customer Managed Key: If the storage account will hold critical data, configuring encryption with a customer managed key should be evaluated and added as needed.
Proposed diff snippet for adding the public access restriction:
resource "azurerm_storage_account" "aks" { name = "${var.prefix}${var.environment}akslog${random_id.aks.hex}" resource_group_name = azurerm_resource_group.aks.name location = azurerm_resource_group.aks.location account_tier = "Standard" account_replication_type = "ZRS" account_kind = "StorageV2" min_tls_version = "TLS1_2" is_hns_enabled = true + allow_blob_public_access = false network_rules { default_action = "Deny" bypass = ["AzureServices"] ip_rules = [ ] virtual_network_subnet_ids = [ ] } }
Likely invalid or redundant comment.
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 21-39: Ensure that Storage blobs restrict public access
(CKV_AZURE_190)
[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key
(CKV2_AZURE_1)
resource "azurerm_monitor_data_collection_endpoint" "aks" { | ||
name = "${azurerm_kubernetes_cluster.aks.name}}-mdce" | ||
resource_group_name = azurerm_resource_group.aks.name | ||
location = azurerm_resource_group.aks.location | ||
kind = "Linux" | ||
} |
🛠️ Refactor suggestion
Fix Extra Brace in Data Collection Endpoint Name
The name attribute on line 2 contains an extra closing curly bracket (}}). This may lead to unexpected naming issues.
Suggested fix:
- name = "${azurerm_kubernetes_cluster.aks.name}}-mdce"
+ name = "${azurerm_kubernetes_cluster.aks.name}-mdce"
resource "azurerm_monitor_data_collection_rule_association" "aks" { | ||
name = "${azurerm_kubernetes_cluster.aks.name}}-mdcra" | ||
target_resource_id = azurerm_kubernetes_cluster.aks.id | ||
data_collection_rule_id = azurerm_monitor_data_collection_rule.aks.id | ||
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster." | ||
depends_on = [ | ||
azurerm_monitor_data_collection_rule.aks | ||
] | ||
} |
🛠️ Refactor suggestion
Resolve Extra Brace in Data Collection Rule Association Name
The name on line 42 also contains an extra }. Correcting this is necessary for predictable resource naming.
Suggested fix:
- name = "${azurerm_kubernetes_cluster.aks.name}}-mdcra"
+ name = "${azurerm_kubernetes_cluster.aks.name}-mdcra"
resource "azurerm_monitor_data_collection_rule" "aks" { | ||
name = "${azurerm_kubernetes_cluster.aks.name}}-mdcr" | ||
resource_group_name = azurerm_resource_group.aks.name | ||
location = azurerm_resource_group.aks.location | ||
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.aks.id | ||
kind = "Linux" | ||
|
||
destinations { | ||
monitor_account { | ||
monitor_account_id = azurerm_monitor_workspace.aks.id | ||
name = azurerm_monitor_workspace.aks.name | ||
} | ||
} | ||
|
||
data_flow { | ||
streams = ["Microsoft-PrometheusMetrics"] | ||
destinations = ["${azurerm_monitor_workspace.aks.name}"] | ||
} | ||
|
||
|
||
data_sources { | ||
prometheus_forwarder { | ||
streams = ["Microsoft-PrometheusMetrics"] | ||
name = "PrometheusDataSource" | ||
} | ||
} | ||
|
||
description = "DCR for Azure Monitor Metrics Profile (Managed Prometheus)" | ||
depends_on = [ | ||
azurerm_monitor_data_collection_endpoint.aks | ||
] | ||
} |
🛠️ Refactor suggestion
Correct Name Interpolation in Data Collection Rule
Similarly, the name attribute on line 9 for the data collection rule resource includes an extraneous }. Please remove the extra brace to ensure the name is interpolated correctly.
- name = "${azurerm_kubernetes_cluster.aks.name}}-mdcr"
+ name = "${azurerm_kubernetes_cluster.aks.name}-mdcr"
Additionally, review the destination and data flow blocks; they appear to be configured properly for routing the "Microsoft-PrometheusMetrics" stream to your monitor workspace.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
resource "azurerm_monitor_data_collection_rule" "aks" { | |
name = "${azurerm_kubernetes_cluster.aks.name}}-mdcr" | |
resource_group_name = azurerm_resource_group.aks.name | |
location = azurerm_resource_group.aks.location | |
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.aks.id | |
kind = "Linux" | |
destinations { | |
monitor_account { | |
monitor_account_id = azurerm_monitor_workspace.aks.id | |
name = azurerm_monitor_workspace.aks.name | |
} | |
} | |
data_flow { | |
streams = ["Microsoft-PrometheusMetrics"] | |
destinations = ["${azurerm_monitor_workspace.aks.name}"] | |
} | |
data_sources { | |
prometheus_forwarder { | |
streams = ["Microsoft-PrometheusMetrics"] | |
name = "PrometheusDataSource" | |
} | |
} | |
description = "DCR for Azure Monitor Metrics Profile (Managed Prometheus)" | |
depends_on = [ | |
azurerm_monitor_data_collection_endpoint.aks | |
] | |
} | |
resource "azurerm_monitor_data_collection_rule" "aks" { | |
name = "${azurerm_kubernetes_cluster.aks.name}-mdcr" | |
resource_group_name = azurerm_resource_group.aks.name | |
location = azurerm_resource_group.aks.location | |
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.aks.id | |
kind = "Linux" | |
destinations { | |
monitor_account { | |
monitor_account_id = azurerm_monitor_workspace.aks.id | |
name = azurerm_monitor_workspace.aks.name | |
} | |
} | |
data_flow { | |
streams = ["Microsoft-PrometheusMetrics"] | |
destinations = ["${azurerm_monitor_workspace.aks.name}"] | |
} | |
data_sources { | |
prometheus_forwarder { | |
streams = ["Microsoft-PrometheusMetrics"] | |
name = "PrometheusDataSource" | |
} | |
} | |
description = "DCR for Azure Monitor Metrics Profile (Managed Prometheus)" | |
depends_on = [ | |
azurerm_monitor_data_collection_endpoint.aks | |
] | |
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
🧹 Nitpick comments (2)
infrastructure/modules/aks/amw.tf (1)
22-26: Nitpick: Simplify destination interpolation.

While valid, the use of interpolation in the destinations block on line 24 is unnecessary in Terraform 0.12+ and can be simplified. For maintainability, consider updating it to:

destinations = [azurerm_monitor_workspace.aks.name]

infrastructure/modules/aks/law.tf (1)
49-55: Approved: Data collection rule association for Container Insights is correctly implemented.

The association resource is defined properly with accurate references to the AKS cluster and the corresponding data collection rule. Consider adding an explicit depends_on if ordering becomes an issue, although the implicit dependency via references is generally sufficient.
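For reference, an explicit dependency would only add one line to the association resource; a sketch (normally redundant, since the data_collection_rule_id reference already implies the ordering):

```hcl
resource "azurerm_monitor_data_collection_rule_association" "law" {
  # ...existing arguments...
  depends_on = [azurerm_monitor_data_collection_rule.law] # explicit ordering, usually implied
}
```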
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- infrastructure/modules/aks/amw.tf (1 hunks)
- infrastructure/modules/aks/law.tf (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (1)
infrastructure/modules/aks/law.tf (1)
1-47: Approved: Resource for Log Analytics data collection rule appears correctly configured.

The resource block uses proper interpolation and JSON encoding. All configuration aspects, from destinations to data flows and data sources, are logically arranged and consistent with best practices.
infrastructure/modules/aks/amw.tf (Outdated)
resource "azurerm_monitor_data_collection_endpoint" "amw" { | ||
name = "${azurerm_monitor_workspace.aks.name}}-mdce" | ||
resource_group_name = azurerm_resource_group.aks.name | ||
location = azurerm_resource_group.aks.location | ||
kind = "Linux" | ||
} |
Critical: Fix string interpolation in data collection endpoint name.
The name attribute on line 2 has an extra closing curly bracket (}}) which will result in a Terraform syntax error. Please update the interpolation to use a single closing bracket.
- name = "${azurerm_monitor_workspace.aks.name}}-mdce"
+ name = "${azurerm_monitor_workspace.aks.name}-mdce"
infrastructure/modules/aks/amw.tf (Outdated)
resource "azurerm_monitor_data_collection_rule_association" "amw" { | ||
name = "${azurerm_monitor_workspace.aks.name}}-mdcra" | ||
target_resource_id = azurerm_kubernetes_cluster.aks.id | ||
data_collection_rule_id = azurerm_monitor_data_collection_rule.amw.id | ||
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster." | ||
depends_on = [ | ||
azurerm_monitor_data_collection_rule.amw | ||
] |
Critical: Adjust string interpolation in data collection rule association name.
Line 42 repeats the same interpolation error with an extra closing curly bracket. Correcting this will ensure proper resource naming and avoid potential failures during deployment.
- name = "${azurerm_monitor_workspace.aks.name}}-mdcra"
+ name = "${azurerm_monitor_workspace.aks.name}-mdcra"
infrastructure/modules/aks/amw.tf (Outdated)
resource "azurerm_monitor_data_collection_rule" "amw" { | ||
name = "${azurerm_monitor_workspace.aks.name}}-mdcr" | ||
resource_group_name = azurerm_resource_group.aks.name | ||
location = azurerm_resource_group.aks.location | ||
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.amw.id | ||
kind = "Linux" | ||
|
Critical: Correct string interpolation in monitor data collection rule name.
Line 9 similarly contains an extra closing curly bracket (}}) in the name attribute. Consistent naming is crucial for resource identification, so please fix it as shown below.
- name = "${azurerm_monitor_workspace.aks.name}}-mdcr"
+ name = "${azurerm_monitor_workspace.aks.name}-mdcr"
Actionable comments posted: 3
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- infrastructure/modules/aks/amw-collection.tf (1 hunks)
- infrastructure/modules/aks/law-collection.tf (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (4)
infrastructure/modules/aks/law-collection.tf (4)
1-11
: Good: Clear and Consistent Naming in Log Analytics Data Collection Rule.The resource "azurerm_monitor_data_collection_rule" for Log Analytics uses proper interpolation:
name = "${azurerm_log_analytics_workspace.aks.name}-mdcr"and clear references to the Log Analytics workspace. The configuration is clean and consistent.
23-44
: Good: Robust Extension Configuration for Container Insights.The
data_sources
block, including both the syslog configuration and the ContainerInsights extension (withjsonencode
), is well structured and clear. This design makes it easier to adjust settings (such as the collection interval or namespace filters) in the future.
49-55
: Approved: Association Resource is Configured Correctly.The "azurerm_monitor_data_collection_rule_association" resource for Log Analytics is defined using clear interpolation and appropriate resource references:
name = "${azurerm_log_analytics_workspace.aks.name}-mdcra" target_resource_id = azurerm_kubernetes_cluster.aks.id data_collection_rule_id = azurerm_monitor_data_collection_rule.law.idNo issues detected.
13-22
: Observation: Verify Data Flow Destination Reference.Within the first data_flow block, the destination is specified as:
destinations = ["${azurerm_log_analytics_workspace.aks.name}"]Ensure that this string interpolation correctly matches the expected destination identifier for your setup. It appears intentional, but please confirm it aligns with your overall monitoring configuration.
✅ Verification successful
Data Flow Destination Reference Verified
After reviewing the Terraform snippet, the string interpolation
destinations = ["${azurerm_log_analytics_workspace.aks.name}"]correctly references the Log Analytics Workspace name, aligning with the intended monitoring configuration.
- The interpolation syntax is valid for Terraform.
- There is no mismatch or misreference identified in this context.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
resource "azurerm_monitor_data_collection_endpoint" "amw" { | ||
name = "${azurerm_monitor_workspace.aks.name}}-mdce" | ||
resource_group_name = azurerm_resource_group.aks.name | ||
location = azurerm_resource_group.aks.location | ||
kind = "Linux" | ||
} |
🛠️ Refactor suggestion
Critical: Fix Extra Curly Braces in Interpolation for Data Collection Endpoint Name.
On line 2 the name is declared as:
name = "${azurerm_monitor_workspace.aks.name}}-mdce"
There is an extra closing brace. This will cause a syntax error during Terraform plan. Please update it to:
name = "${azurerm_monitor_workspace.aks.name}-mdce"
resource "azurerm_monitor_data_collection_rule" "amw" { | ||
name = "${azurerm_monitor_workspace.aks.name}}-mdcr" | ||
resource_group_name = azurerm_resource_group.aks.name |
🛠️ Refactor suggestion
Critical: Correct Interpolation Syntax in Data Collection Rule Name.
The resource "azurerm_monitor_data_collection_rule" on line 9 uses the pattern:
name = "${azurerm_monitor_workspace.aks.name}}-mdcr"
The extra closing brace (}}) should be removed to avoid errors. Revise this line to:
name = "${azurerm_monitor_workspace.aks.name}-mdcr"
resource "azurerm_monitor_data_collection_rule_association" "amw" { | ||
name = "${azurerm_monitor_workspace.aks.name}}-mdcra" | ||
target_resource_id = azurerm_kubernetes_cluster.aks.id |
🛠️ Refactor suggestion
Critical: Fix Interpolation in Association Resource Name.
In the "azurerm_monitor_data_collection_rule_association" resource (line 42), the name is defined as:
name = "${azurerm_monitor_workspace.aks.name}}-mdcra"
Remove the extra curly brace so it becomes:
name = "${azurerm_monitor_workspace.aks.name}-mdcra"
Description
Summary by CodeRabbit