Basic setup for auth-at22-aks #1280

Open · wants to merge 13 commits into main
Conversation

bengtfredh (Member) commented Jan 30, 2025

Description

  • Early version of the AKS module
  • Basic setup for auth-at22-aks

Summary by CodeRabbit

  • New Features
    • Introduced an automated deployment workflow that supports manual triggering with customizable logging options.
    • Rolled out a new configuration for Azure Kubernetes Service, providing scalable cluster deployments with enhanced networking and image registry integration.
    • Established comprehensive monitoring and logging capabilities, integrating diagnostic settings and advanced metrics collection for improved observability.
    • Added support for data collection rules and associations to enhance monitoring of Kubernetes metrics and logs.
    • Implemented new Azure Monitor resources for data collection, including data collection endpoints and rules for enhanced observability.

coderabbitai bot (Contributor) commented Jan 30, 2025

📝 Walkthrough

This pull request introduces a new GitHub Actions workflow to automate Terraform plan and apply procedures on the main branch, triggered by pushes, pull requests, and manual dispatch events. In tandem, several Terraform configuration files have been added or updated to provision an AKS cluster and its associated resources in Azure. These changes cover cluster deployment, network setup, monitoring with Azure Monitor, and role assignments for ACR pull access, as well as the necessary provider and variable definitions to support the infrastructure.

Changes

File(s) | Change Summary
.github/workflows/auth-at22-aks-rg.yml | Added a new GitHub Actions workflow with plan and deploy jobs that trigger on push, PR, and manual dispatch, executing Terraform plan and apply.
infrastructure/altinn-auth-test/auth-at22-aks-rg/{aks.tf, providers.tf, terraform.tfvars, variables.tf} | Introduced new Terraform configurations for AKS module deployment including AKS settings, provider configuration with Azure AD/OIDC, and subscription ID variable definitions.
infrastructure/modules/aks/{aks.tf, flux.tf, network.tf, providers.tf, roles.tf, variables.tf, monitor.tf, amw-collection.tf, law-collection.tf} | Added comprehensive Terraform modules for AKS cluster deployment covering resource groups, clusters (with node pools and maintenance settings), networking (VNet, subnets, public IPs/prefixes), AKS extensions (flux), role assignments for ACR pulling, monitoring setups, and Azure Monitor data collection rules and associations.

Sequence Diagram(s)

sequenceDiagram
    participant U as User/Event
    participant GH as GitHub Actions
    participant Plan as Terraform Plan Job
    participant Deploy as Terraform Apply Job

    U->>GH: Push/PR event or Manual Dispatch
    GH->>Plan: Trigger Terraform plan with workflow inputs
    Plan-->>GH: Return plan results
    alt Plan Successful
        GH->>Deploy: Trigger Terraform apply job
        Deploy-->>GH: Return apply results
    else Plan Fails
        GH-->>U: Notify failure
    end

Suggested reviewers

  • monteiro-renato
  • tjololo

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a3f068 and 10165c5.

📒 Files selected for processing (1)
  • infrastructure/altinn-auth-test/auth-at22-aks-rg/aks.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • infrastructure/altinn-auth-test/auth-at22-aks-rg/aks.tf
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Analyze (go)


github-actions bot commented Jan 30, 2025

Terraform environment test

Format and Style 🖌 success

Initialization ⚙️ success

Validation 🤖 success

Validation Output

Success! The configuration is valid.


Plan 📖 success

Show Plan

[Lines containing Refreshing state removed]
Acquiring state lock. This may take a few moments...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.aks.azurerm_kubernetes_cluster.aks will be created
  + resource "azurerm_kubernetes_cluster" "aks" {
      + automatic_upgrade_channel           = "patch"
      + current_kubernetes_version          = (known after apply)
      + dns_prefix                          = "auth-at22"
      + fqdn                                = (known after apply)
      + http_application_routing_zone_name  = (known after apply)
      + id                                  = (known after apply)
      + kube_admin_config                   = (sensitive value)
      + kube_admin_config_raw               = (sensitive value)
      + kube_config                         = (sensitive value)
      + kube_config_raw                     = (sensitive value)
      + kubernetes_version                  = "1.30"
      + location                            = "norwayeast"
      + name                                = "auth-at22-aks"
      + node_os_upgrade_channel             = "NodeImage"
      + node_resource_group                 = (known after apply)
      + node_resource_group_id              = (known after apply)
      + oidc_issuer_enabled                 = true
      + oidc_issuer_url                     = (known after apply)
      + portal_fqdn                         = (known after apply)
      + private_cluster_enabled             = false
      + private_cluster_public_fqdn_enabled = false
      + private_dns_zone_id                 = (known after apply)
      + private_fqdn                        = (known after apply)
      + resource_group_name                 = "auth-at22-aks-rg"
      + role_based_access_control_enabled   = true
      + run_command_enabled                 = true
      + sku_tier                            = "Free"
      + support_plan                        = "KubernetesOfficial"
      + workload_identity_enabled           = true

      + auto_scaler_profile (known after apply)

      + default_node_pool {
          + auto_scaling_enabled         = true
          + kubelet_disk_type            = (known after apply)
          + max_count                    = 6
          + max_pods                     = 200
          + min_count                    = 1
          + name                         = "syspool"
          + node_count                   = 1
          + node_labels                  = (known after apply)
          + only_critical_addons_enabled = true
          + orchestrator_version         = "1.30"
          + os_disk_size_gb              = (known after apply)
          + os_disk_type                 = "Managed"
          + os_sku                       = "AzureLinux"
          + scale_down_mode              = "Delete"
          + temporary_name_for_rotation  = "syspool99"
          + type                         = "VirtualMachineScaleSets"
          + ultra_ssd_enabled            = false
          + vm_size                      = "standard_b2s_v2"
          + vnet_subnet_id               = (known after apply)
          + workload_runtime             = (known after apply)
          + zones                        = [
              + "1",
              + "2",
              + "3",
            ]

          + upgrade_settings {
              + max_surge = "10%"
            }
        }

      + identity {
          + principal_id = (known after apply)
          + tenant_id    = (known after apply)
          + type         = "SystemAssigned"
        }

      + kubelet_identity (known after apply)

      + maintenance_window_auto_upgrade {
          + day_of_week = "Monday"
          + duration    = 5
          + frequency   = "Weekly"
          + interval    = 1
          + start_date  = (known after apply)
          + start_time  = "23:30"
          + utc_offset  = "+00:00"
        }

      + maintenance_window_node_os {
          + day_of_week = "Tuesday"
          + duration    = 5
          + frequency   = "Weekly"
          + interval    = 1
          + start_date  = (known after apply)
          + start_time  = "23:30"
          + utc_offset  = "+00:00"
        }

      + monitor_metrics {}

      + network_profile {
          + dns_service_ip      = "10.250.0.10"
          + ip_versions         = [
              + "IPv4",
              + "IPv6",
            ]
          + load_balancer_sku   = "standard"
          + network_data_plane  = "azure"
          + network_mode        = (known after apply)
          + network_plugin      = "azure"
          + network_plugin_mode = "overlay"
          + network_policy      = (known after apply)
          + outbound_type       = "loadBalancer"
          + pod_cidr            = (known after apply)
          + pod_cidrs           = [
              + "10.240.0.0/16",
              + "fd10:59f0:8c79:240::/64",
            ]
          + service_cidr        = (known after apply)
          + service_cidrs       = [
              + "10.250.0.0/24",
              + "fd10:59f0:8c79:250::/108",
            ]

          + load_balancer_profile {
              + backend_pool_type           = "NodeIPConfiguration"
              + effective_outbound_ips      = (known after apply)
              + idle_timeout_in_minutes     = 30
              + managed_outbound_ip_count   = (known after apply)
              + managed_outbound_ipv6_count = (known after apply)
              + outbound_ip_prefix_ids      = (known after apply)
              + outbound_ports_allocated    = 0
            }

          + nat_gateway_profile (known after apply)
        }

      + oms_agent {
          + log_analytics_workspace_id      = (known after apply)
          + msi_auth_for_monitoring_enabled = true
          + oms_agent_identity              = (known after apply)
        }

      + windows_profile (known after apply)
    }

  # module.aks.azurerm_kubernetes_cluster_extension.flux will be created
  + resource "azurerm_kubernetes_cluster_extension" "flux" {
      + aks_assigned_identity  = (known after apply)
      + cluster_id             = (known after apply)
      + configuration_settings = {
          + "autoUpgradeMinorVersion" = "true"
          + "multiTenancy.enforce"    = "false"
          + "useKubeletIdentity"      = "true"
        }
      + current_version        = (known after apply)
      + extension_type         = "microsoft.flux"
      + id                     = (known after apply)
      + name                   = "flux"
      + release_namespace      = (known after apply)
      + release_train          = (known after apply)
      + target_namespace       = (known after apply)
    }

  # module.aks.azurerm_kubernetes_cluster_node_pool.workpool will be created
  + resource "azurerm_kubernetes_cluster_node_pool" "workpool" {
      + auto_scaling_enabled  = true
      + id                    = (known after apply)
      + kubelet_disk_type     = (known after apply)
      + kubernetes_cluster_id = (known after apply)
      + max_count             = 6
      + max_pods              = 200
      + min_count             = 0
      + mode                  = "User"
      + name                  = "workpool"
      + node_count            = 0
      + node_labels           = (known after apply)
      + orchestrator_version  = "1.30"
      + os_disk_size_gb       = (known after apply)
      + os_disk_type          = "Managed"
      + os_sku                = "AzureLinux"
      + os_type               = "Linux"
      + priority              = "Regular"
      + scale_down_mode       = "Delete"
      + spot_max_price        = -1
      + ultra_ssd_enabled     = false
      + vm_size               = "standard_b2s_v2"
      + vnet_subnet_id        = (known after apply)
      + zones                 = [
          + "1",
          + "2",
          + "3",
        ]

      + upgrade_settings {
          + max_surge = "10%"
        }
    }

  # module.aks.azurerm_log_analytics_workspace.aks will be created
  + resource "azurerm_log_analytics_workspace" "aks" {
      + allow_resource_only_permissions = true
      + daily_quota_gb                  = -1
      + id                              = (known after apply)
      + internet_ingestion_enabled      = true
      + internet_query_enabled          = true
      + local_authentication_disabled   = false
      + location                        = "norwayeast"
      + name                            = "auth-at22-aks-law"
      + primary_shared_key              = (sensitive value)
      + resource_group_name             = "auth-at22-aks-rg"
      + retention_in_days               = 30
      + secondary_shared_key            = (sensitive value)
      + sku                             = (known after apply)
      + workspace_id                    = (known after apply)

      + identity {
          + principal_id = (known after apply)
          + tenant_id    = (known after apply)
          + type         = "SystemAssigned"
        }
    }

  # module.aks.azurerm_monitor_data_collection_endpoint.amw will be created
  + resource "azurerm_monitor_data_collection_endpoint" "amw" {
      + configuration_access_endpoint = (known after apply)
      + id                            = (known after apply)
      + immutable_id                  = (known after apply)
      + kind                          = "Linux"
      + location                      = "norwayeast"
      + logs_ingestion_endpoint       = (known after apply)
      + name                          = "auth-at22-aks-amw-mdce"
      + public_network_access_enabled = true
      + resource_group_name           = "auth-at22-aks-rg"
    }

  # module.aks.azurerm_monitor_data_collection_rule.amw will be created
  + resource "azurerm_monitor_data_collection_rule" "amw" {
      + data_collection_endpoint_id = (known after apply)
      + description                 = "DCR for Azure Monitor Metrics Profile (Managed Prometheus)"
      + id                          = (known after apply)
      + immutable_id                = (known after apply)
      + kind                        = "Linux"
      + location                    = "norwayeast"
      + name                        = "auth-at22-aks-amw-mdcr"
      + resource_group_name         = "auth-at22-aks-rg"

      + data_flow {
          + destinations = [
              + "auth-at22-aks-amw",
            ]
          + streams      = [
              + "Microsoft-PrometheusMetrics",
            ]
        }

      + data_sources {
          + prometheus_forwarder {
              + name    = "PrometheusDataSource"
              + streams = [
                  + "Microsoft-PrometheusMetrics",
                ]
            }
        }

      + destinations {
          + monitor_account {
              + monitor_account_id = (known after apply)
              + name               = "auth-at22-aks-amw"
            }
        }
    }

  # module.aks.azurerm_monitor_data_collection_rule.law will be created
  + resource "azurerm_monitor_data_collection_rule" "law" {
      + description         = "DCR for Azure Monitor Container Insights"
      + id                  = (known after apply)
      + immutable_id        = (known after apply)
      + location            = "norwayeast"
      + name                = "auth-at22-aks-law-mdcr"
      + resource_group_name = "auth-at22-aks-rg"

      + data_flow {
          + destinations = [
              + "auth-at22-aks-law",
            ]
          + streams      = [
              + "Microsoft-ContainerLog",
              + "Microsoft-ContainerLogV2",
              + "Microsoft-KubeEvents",
              + "Microsoft-KubePodInventory",
            ]
        }
      + data_flow {
          + destinations = [
              + "auth-at22-aks-law",
            ]
          + streams      = [
              + "Microsoft-Syslog",
            ]
        }

      + data_sources {
          + extension {
              + extension_json = jsonencode(
                    {
                      + dataCollectionSettings = {
                          + enableContainerLogV2   = true
                          + interval               = "5m"
                          + namespaceFilteringMode = "Exclude"
                          + namespaces             = [
                              + "kube-system",
                              + "gatekeeper-system",
                              + "azure-arc",
                            ]
                        }
                    }
                )
              + extension_name = "ContainerInsights"
              + name           = "ContainerInsightsExtension"
              + streams        = [
                  + "Microsoft-ContainerLog",
                  + "Microsoft-ContainerLogV2",
                  + "Microsoft-KubeEvents",
                  + "Microsoft-KubePodInventory",
                ]
            }
          + syslog {
              + facility_names = [
                  + "auth",
                  + "authpriv",
                  + "cron",
                  + "daemon",
                  + "mark",
                  + "kern",
                  + "local0",
                  + "local1",
                  + "local2",
                  + "local3",
                  + "local4",
                  + "local5",
                  + "local6",
                  + "local7",
                  + "lpr",
                  + "mail",
                  + "news",
                  + "syslog",
                  + "user",
                  + "uucp",
                ]
              + log_levels     = [
                  + "Error",
                  + "Critical",
                  + "Alert",
                  + "Emergency",
                ]
              + name           = "sysLogsDataSource"
              + streams        = [
                  + "Microsoft-Syslog",
                ]
            }
        }

      + destinations {
          + log_analytics {
              + name                  = "auth-at22-aks-law"
              + workspace_resource_id = (known after apply)
            }
        }
    }

  # module.aks.azurerm_monitor_data_collection_rule_association.amw will be created
  + resource "azurerm_monitor_data_collection_rule_association" "amw" {
      + data_collection_rule_id = (known after apply)
      + description             = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster."
      + id                      = (known after apply)
      + name                    = "auth-at22-aks-amw-mdcra"
      + target_resource_id      = (known after apply)
    }

  # module.aks.azurerm_monitor_data_collection_rule_association.law will be created
  + resource "azurerm_monitor_data_collection_rule_association" "law" {
      + data_collection_rule_id = (known after apply)
      + description             = "Association of container insights data collection rule. Deleting this association will break the data collection for this AKS Cluster."
      + id                      = (known after apply)
      + name                    = "auth-at22-aks-law-mdcra"
      + target_resource_id      = (known after apply)
    }

  # module.aks.azurerm_monitor_diagnostic_setting.aks will be created
  + resource "azurerm_monitor_diagnostic_setting" "aks" {
      + id                             = (known after apply)
      + log_analytics_destination_type = (known after apply)
      + name                           = "AKS-Diagnostics"
      + storage_account_id             = (known after apply)
      + target_resource_id             = (known after apply)

      + enabled_log {
          + category       = "kube-audit-admin"
            # (1 unchanged attribute hidden)
        }

      + metric {
          + category = "AllMetrics"
          + enabled  = false
        }
    }

  # module.aks.azurerm_monitor_workspace.aks will be created
  + resource "azurerm_monitor_workspace" "aks" {
      + default_data_collection_endpoint_id = (known after apply)
      + default_data_collection_rule_id     = (known after apply)
      + id                                  = (known after apply)
      + location                            = "norwayeast"
      + name                                = "auth-at22-aks-amw"
      + public_network_access_enabled       = true
      + query_endpoint                      = (known after apply)
      + resource_group_name                 = "auth-at22-aks-rg"
    }

  # module.aks.azurerm_public_ip.pip4 will be created
  + resource "azurerm_public_ip" "pip4" {
      + allocation_method       = "Static"
      + ddos_protection_mode    = "VirtualNetworkInherited"
      + domain_name_label       = "auth-at22"
      + fqdn                    = (known after apply)
      + id                      = (known after apply)
      + idle_timeout_in_minutes = 4
      + ip_address              = (known after apply)
      + ip_version              = "IPv4"
      + location                = "norwayeast"
      + name                    = "auth-at22-aks-pip4"
      + resource_group_name     = (known after apply)
      + sku                     = "Standard"
      + sku_tier                = "Regional"
      + zones                   = [
          + "1",
          + "2",
          + "3",
        ]
    }

  # module.aks.azurerm_public_ip.pip6 will be created
  + resource "azurerm_public_ip" "pip6" {
      + allocation_method       = "Static"
      + ddos_protection_mode    = "VirtualNetworkInherited"
      + domain_name_label       = "auth-at22"
      + fqdn                    = (known after apply)
      + id                      = (known after apply)
      + idle_timeout_in_minutes = 4
      + ip_address              = (known after apply)
      + ip_version              = "IPv6"
      + location                = "norwayeast"
      + name                    = "auth-at22-aks-pip6"
      + resource_group_name     = (known after apply)
      + sku                     = "Standard"
      + sku_tier                = "Regional"
      + zones                   = [
          + "1",
          + "2",
          + "3",
        ]
    }

  # module.aks.azurerm_public_ip_prefix.prefix4 will be created
  + resource "azurerm_public_ip_prefix" "prefix4" {
      + id                  = (known after apply)
      + ip_prefix           = (known after apply)
      + ip_version          = "IPv4"
      + location            = "norwayeast"
      + name                = "auth-at22-aks-prefix4"
      + prefix_length       = 31
      + resource_group_name = "auth-at22-aks-rg"
      + sku                 = "Standard"
      + sku_tier            = "Regional"
      + zones               = [
          + "1",
          + "2",
          + "3",
        ]
    }

  # module.aks.azurerm_public_ip_prefix.prefix6 will be created
  + resource "azurerm_public_ip_prefix" "prefix6" {
      + id                  = (known after apply)
      + ip_prefix           = (known after apply)
      + ip_version          = "IPv6"
      + location            = "norwayeast"
      + name                = "auth-at22-aks-prefix6"
      + prefix_length       = 127
      + resource_group_name = "auth-at22-aks-rg"
      + sku                 = "Standard"
      + sku_tier            = "Regional"
      + zones               = [
          + "1",
          + "2",
          + "3",
        ]
    }

  # module.aks.azurerm_resource_group.aks will be created
  + resource "azurerm_resource_group" "aks" {
      + id       = (known after apply)
      + location = "norwayeast"
      + name     = "auth-at22-aks-rg"
    }

  # module.aks.azurerm_role_assignment.aks_acrpull["/subscriptions/a6e9ee7d-2b65-41e1-adfb-0c8c23515cf9/resourceGroups/acr/providers/Microsoft.ContainerRegistry/registries/altinncr"] will be created
  + resource "azurerm_role_assignment" "aks_acrpull" {
      + condition_version                = (known after apply)
      + id                               = (known after apply)
      + name                             = (known after apply)
      + principal_id                     = (known after apply)
      + principal_type                   = (known after apply)
      + role_definition_id               = (known after apply)
      + role_definition_name             = "AcrPull"
      + scope                            = "/subscriptions/a6e9ee7d-2b65-41e1-adfb-0c8c23515cf9/resourceGroups/acr/providers/Microsoft.ContainerRegistry/registries/altinncr"
      + skip_service_principal_aad_check = true
    }

  # module.aks.azurerm_storage_account.aks will be created
  + resource "azurerm_storage_account" "aks" {
      + access_tier                        = (known after apply)
      + account_kind                       = "StorageV2"
      + account_replication_type           = "ZRS"
      + account_tier                       = "Standard"
      + allow_nested_items_to_be_public    = true
      + cross_tenant_replication_enabled   = false
      + default_to_oauth_authentication    = false
      + dns_endpoint_type                  = "Standard"
      + https_traffic_only_enabled         = true
      + id                                 = (known after apply)
      + infrastructure_encryption_enabled  = false
      + is_hns_enabled                     = true
      + large_file_share_enabled           = (known after apply)
      + local_user_enabled                 = true
      + location                           = "norwayeast"
      + min_tls_version                    = "TLS1_2"
      + name                               = (known after apply)
      + nfsv3_enabled                      = false
      + primary_access_key                 = (sensitive value)
      + primary_blob_connection_string     = (sensitive value)
      + primary_blob_endpoint              = (known after apply)
      + primary_blob_host                  = (known after apply)
      + primary_blob_internet_endpoint     = (known after apply)
      + primary_blob_internet_host         = (known after apply)
      + primary_blob_microsoft_endpoint    = (known after apply)
      + primary_blob_microsoft_host        = (known after apply)
      + primary_connection_string          = (sensitive value)
      + primary_dfs_endpoint               = (known after apply)
      + primary_dfs_host                   = (known after apply)
      + primary_dfs_internet_endpoint      = (known after apply)
      + primary_dfs_internet_host          = (known after apply)
      + primary_dfs_microsoft_endpoint     = (known after apply)
      + primary_dfs_microsoft_host         = (known after apply)
      + primary_file_endpoint              = (known after apply)
      + primary_file_host                  = (known after apply)
      + primary_file_internet_endpoint     = (known after apply)
      + primary_file_internet_host         = (known after apply)
      + primary_file_microsoft_endpoint    = (known after apply)
      + primary_file_microsoft_host        = (known after apply)
      + primary_location                   = (known after apply)
      + primary_queue_endpoint             = (known after apply)
      + primary_queue_host                 = (known after apply)
      + primary_queue_microsoft_endpoint   = (known after apply)
      + primary_queue_microsoft_host       = (known after apply)
      + primary_table_endpoint             = (known after apply)
      + primary_table_host                 = (known after apply)
      + primary_table_microsoft_endpoint   = (known after apply)
      + primary_table_microsoft_host       = (known after apply)
      + primary_web_endpoint               = (known after apply)
      + primary_web_host                   = (known after apply)
      + primary_web_internet_endpoint      = (known after apply)
      + primary_web_internet_host          = (known after apply)
      + primary_web_microsoft_endpoint     = (known after apply)
      + primary_web_microsoft_host         = (known after apply)
      + public_network_access_enabled      = true
      + queue_encryption_key_type          = "Service"
      + resource_group_name                = "auth-at22-aks-rg"
      + secondary_access_key               = (sensitive value)
      + secondary_blob_connection_string   = (sensitive value)
      + secondary_blob_endpoint            = (known after apply)
      + secondary_blob_host                = (known after apply)
      + secondary_blob_internet_endpoint   = (known after apply)
      + secondary_blob_internet_host       = (known after apply)
      + secondary_blob_microsoft_endpoint  = (known after apply)
      + secondary_blob_microsoft_host      = (known after apply)
      + secondary_connection_string        = (sensitive value)
      + secondary_dfs_endpoint             = (known after apply)
      + secondary_dfs_host                 = (known after apply)
      + secondary_dfs_internet_endpoint    = (known after apply)
      + secondary_dfs_internet_host        = (known after apply)
      + secondary_dfs_microsoft_endpoint   = (known after apply)
      + secondary_dfs_microsoft_host       = (known after apply)
      + secondary_file_endpoint            = (known after apply)
      + secondary_file_host                = (known after apply)
      + secondary_file_internet_endpoint   = (known after apply)
      + secondary_file_internet_host       = (known after apply)
      + secondary_file_microsoft_endpoint  = (known after apply)
      + secondary_file_microsoft_host      = (known after apply)
      + secondary_location                 = (known after apply)
      + secondary_queue_endpoint           = (known after apply)
      + secondary_queue_host               = (known after apply)
      + secondary_queue_microsoft_endpoint = (known after apply)
      + secondary_queue_microsoft_host     = (known after apply)
      + secondary_table_endpoint           = (known after apply)
      + secondary_table_host               = (known after apply)
      + secondary_table_microsoft_endpoint = (known after apply)
      + secondary_table_microsoft_host     = (known after apply)
      + secondary_web_endpoint             = (known after apply)
      + secondary_web_host                 = (known after apply)
      + secondary_web_internet_endpoint    = (known after apply)
      + secondary_web_internet_host        = (known after apply)
      + secondary_web_microsoft_endpoint   = (known after apply)
      + secondary_web_microsoft_host       = (known after apply)
      + sftp_enabled                       = false
      + shared_access_key_enabled          = true
      + table_encryption_key_type          = "Service"

      + blob_properties (known after apply)

      + network_rules {
          + bypass                     = [
              + "AzureServices",
            ]
          + default_action             = "Deny"
          + ip_rules                   = (known after apply)
          + virtual_network_subnet_ids = (known after apply)
        }

      + queue_properties (known after apply)

      + routing (known after apply)

      + share_properties (known after apply)

      + static_website (known after apply)
    }

  # module.aks.azurerm_subnet.aks["aks_syspool"] will be created
  + resource "azurerm_subnet" "aks" {
      + address_prefixes                              = [
          + "fd0a:7204:c37f:901::/64",
          + "10.202.72.0/24",
        ]
      + default_outbound_access_enabled               = true
      + id                                            = (known after apply)
      + name                                          = "aks_syspool"
      + private_endpoint_network_policies             = "Disabled"
      + private_link_service_network_policies_enabled = true
      + resource_group_name                           = "auth-at22-aks-rg"
      + virtual_network_name                          = "auth-at22-aks-vnet"
    }

  # module.aks.azurerm_subnet.aks["aks_workpool"] will be created
  + resource "azurerm_subnet" "aks" {
      + address_prefixes                              = [
          + "fd0a:7204:c37f:902::/64",
          + "10.202.73.0/24",
        ]
      + default_outbound_access_enabled               = true
      + id                                            = (known after apply)
      + name                                          = "aks_workpool"
      + private_endpoint_network_policies             = "Disabled"
      + private_link_service_network_policies_enabled = true
      + resource_group_name                           = "auth-at22-aks-rg"
      + virtual_network_name                          = "auth-at22-aks-vnet"
    }

  # module.aks.azurerm_virtual_network.aks will be created
  + resource "azurerm_virtual_network" "aks" {
      + address_space                  = [
          + "10.202.72.0/21",
          + "fd0a:7204:c37f:900::/56",
        ]
      + dns_servers                    = (known after apply)
      + guid                           = (known after apply)
      + id                             = (known after apply)
      + location                       = "norwayeast"
      + name                           = "auth-at22-aks-vnet"
      + private_endpoint_vnet_policies = "Disabled"
      + resource_group_name            = "auth-at22-aks-rg"
      + subnet                         = (known after apply)
    }

  # module.aks.random_id.aks will be created
  + resource "random_id" "aks" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 3
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 22 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan.out

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan.out"
Releasing state lock. This may take a few moments...

Context Values

Pusher: @bengtfredh
Action: pull_request
Working Directory: ./infrastructure/altinn-auth-test/auth-at22-aks-rg
State File: github.com/altinn/altinn-platform/environments/test/auth-at22-aks-rg.tfstate
Plan File: github.com_altinn_altinn-platform_environments_test_auth-at22-aks-rg.tfstate.tfplan

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

🧹 Nitpick comments (8)
infrastructure/altinn-auth-test/auth-at22-aks-rg/variables.tf (1)

1-4: Enhance variable description for better clarity.

While the variable definition is correct, consider enhancing the description to be more specific about its purpose and format.

 variable "subscription_id" {
   type        = string
-  description = "Subscription id to deploy services"
+  description = "Azure subscription ID (UUID) where resources will be deployed"
 }
infrastructure/modules/aks/flux.tf (1)

1-5: Consider adding Flux configuration settings.

The current Flux extension setup is minimal. Consider adding:

  1. Specific version configuration
  2. GitOps repository configuration
  3. Flux configuration settings for better control over the deployment process

Example configuration block to consider adding:

configuration_settings = {
  "helm-controller.enabled"    = true
  "source-controller.enabled"  = true
  "kustomize-controller.enabled" = true
  "notification-controller.enabled" = true
  "flux.version"              = "2.0.0"
}
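
For the GitOps repository configuration, a minimal sketch of a Flux source wired to the extension could look like the following; the repository URL, branch, and kustomization path are placeholders, not something defined in this PR:

resource "azurerm_kubernetes_flux_configuration" "platform" {
  name       = "platform"
  cluster_id = azurerm_kubernetes_cluster.aks.id
  namespace  = "flux-system"

  git_repository {
    url             = "https://github.com/example-org/example-gitops"
    reference_type  = "branch"
    reference_value = "main"
  }

  kustomizations {
    name = "apps"
    path = "./apps"
  }

  # The flux extension must exist before a flux configuration can be applied.
  depends_on = [azurerm_kubernetes_cluster_extension.flux]
}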
infrastructure/altinn-auth-test/auth-at22-aks-rg/providers.tf (1)

17-22: Consider additional essential resource providers.

The current list of resource providers is good, but consider adding these commonly needed providers for AKS:

   resource_providers_to_register = [
     "Microsoft.Monitor",
     "Microsoft.AlertsManagement",
     "Microsoft.Dashboard",
-    "Microsoft.KubernetesConfiguration"
+    "Microsoft.KubernetesConfiguration",
+    "Microsoft.ContainerService",
+    "Microsoft.OperationsManagement",
+    "Microsoft.OperationalInsights"
   ]
infrastructure/altinn-auth-test/auth-at22-aks-rg/aks.tf (1)

19-24: Review system node pool configuration.

The system node pool configuration has potential issues:

  1. standard_b2s_v2 might not be suitable for system workloads
  2. Min count of 1 node doesn't provide high availability

Consider these changes:

     syspool = {
-      vm_size              = "standard_b2s_v2"
+      vm_size              = "Standard_D4s_v3"  # More suitable for system workloads
       auto_scaling_enabled = "true"
       node_count           = "1"
-      min_count            = "1"
+      min_count            = "2"  # Ensure high availability
       max_count            = "6"
infrastructure/modules/aks/variables.tf (1)

17-20: Add validation for aks_sku_tier and kubernetes_version.

Consider adding validation rules to ensure:

  • aks_sku_tier is either "Free" or "Standard"
  • kubernetes_version follows semantic versioning pattern
 variable "aks_sku_tier" {
   type        = string
   description = "Kubernetes sku"
+  validation {
+    condition     = contains(["Free", "Standard"], var.aks_sku_tier)
+    error_message = "aks_sku_tier must be either Free or Standard."
+  }
 }

 variable "kubernetes_version" {
   type        = string
   description = "Kubernetes version"
+  validation {
+    condition     = can(regex("^\\d+\\.\\d+\\.\\d+$", var.kubernetes_version))
+    error_message = "kubernetes_version must follow semantic versioning (e.g., 1.25.0)."
+  }
 }

Also applies to: 21-24

infrastructure/modules/aks/network.tf (1)

24-25: Potential DNS conflict with identical domain name labels.

Both IPv4 and IPv6 public IPs use the same domain name label. This might cause DNS resolution issues. Consider using distinct labels or a suffix for IPv6.

-  domain_name_label   = "${var.prefix}-${var.environment}"
+  domain_name_label   = "${var.prefix}-${var.environment}-v4"

-  domain_name_label   = "${var.prefix}-${var.environment}"
+  domain_name_label   = "${var.prefix}-${var.environment}-v6"

Also applies to: 35-36

.github/workflows/auth-at22-aks-rg.yml (2)

48-48: Add environment protection rules.

The test environment lacks protection rules. Consider adding:

  • Required reviewers
  • Wait timer
  • Branch restrictions

Also applies to: 68-68


64-64: Specify Terraform version constraint.

Using latest for Terraform version could lead to unexpected behavior if a new version introduces breaking changes. Consider pinning to a specific version or version range.

-          tf_version: latest
+          tf_version: "~> 1.5.0"

Also applies to: 85-85

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6195a3d and 63fe602.

📒 Files selected for processing (12)
  • .github/workflows/auth-at22-aks-rg.yml (1 hunks)
  • infrastructure/altinn-auth-test/auth-at22-aks-rg/aks.tf (1 hunks)
  • infrastructure/altinn-auth-test/auth-at22-aks-rg/providers.tf (1 hunks)
  • infrastructure/altinn-auth-test/auth-at22-aks-rg/terraform.tfvars (1 hunks)
  • infrastructure/altinn-auth-test/auth-at22-aks-rg/variables.tf (1 hunks)
  • infrastructure/modules/aks/aks.tf (1 hunks)
  • infrastructure/modules/aks/data.tf (1 hunks)
  • infrastructure/modules/aks/flux.tf (1 hunks)
  • infrastructure/modules/aks/network.tf (1 hunks)
  • infrastructure/modules/aks/providers.tf (1 hunks)
  • infrastructure/modules/aks/roles.tf (1 hunks)
  • infrastructure/modules/aks/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (3)
  • infrastructure/altinn-auth-test/auth-at22-aks-rg/terraform.tfvars
  • infrastructure/modules/aks/data.tf
  • infrastructure/modules/aks/providers.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/aks.tf

[HIGH] 6-86: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources

(CKV_AZURE_227)


[HIGH] 88-109: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources

(CKV_AZURE_227)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Analyze (go)
🔇 Additional comments (5)
infrastructure/modules/aks/roles.tf (1)

1-9: Review security implications of skipping AAD check.

The configuration skips the Azure AD service principal check, which could potentially impact security. Please verify if this is intentionally required for your use case.

Also, consider adding an explicit create timeout so slow role-assignment propagation does not fail the apply:

 resource "azurerm_role_assignment" "aks_acrpull" {
   for_each                         = toset(var.aks_acrpull_scopes)
   principal_id                     = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
   role_definition_name             = "AcrPull"
   scope                            = each.value
   skip_service_principal_aad_check = true
+  timeouts {
+    create = "30m"
+  }
 
   depends_on = [azurerm_kubernetes_cluster.aks]
 }
infrastructure/altinn-auth-test/auth-at22-aks-rg/providers.tf (1)

4-6: Update azurerm provider version.

The specified version ~> 4.16.0 is outdated. Consider updating to a more recent version for security patches and new features.

     azurerm = {
       source  = "hashicorp/azurerm"
-      version = "~> 4.16.0"
+      version = "~> 4.79.0"  # Or the latest stable version
     }
✅ Verification successful

Align provider version with project standard

The specific version constraint ~> 4.16.0 is unnecessarily restrictive. The rest of the codebase consistently uses ~> 4.0, which allows for minor version updates while maintaining major version compatibility. Recommend changing to match the project standard:

     azurerm = {
       source  = "hashicorp/azurerm"
-      version = "~> 4.16.0"
+      version = "~> 4.0"
     }
🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Find all Terraform files
echo "=== Terraform Files ==="
fd -e tf

echo -e "\n=== Provider Versions ==="
# Search for provider version specifications
rg "version.*=.*\"~>.*\"" -A 2 -B 2

echo -e "\n=== README Content ==="
# Check if there's any documentation about provider versions
cat README.md 2>/dev/null || echo "README.md not found"

echo -e "\n=== Provider Configurations ==="
# Look for provider configurations and features
rg "provider \"azurerm\"" -A 10

Length of output: 15105

infrastructure/modules/aks/network.tf (1)

19-20: Consider resource group placement for public IPs.

The public IPs are created in the node resource group while the IP prefixes are in the main resource group. This split might complicate resource management. Consider using the same resource group for all networking resources.

Also applies to: 29-30
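
A minimal sketch of the IPv4 public IP placed in the module's own resource group instead of the AKS node resource group, reusing the module's existing prefix/environment variables (illustrative only; note the cluster identity then needs network permissions on that resource group):

resource "azurerm_public_ip" "pip4" {
  name                = "${var.prefix}-${var.environment}-aks-pip4"
  resource_group_name = azurerm_resource_group.aks.name # main RG instead of the node RG
  location            = azurerm_resource_group.aks.location
  allocation_method   = "Static"
  sku                 = "Standard"
  ip_version          = "IPv4"
  zones               = ["1", "2", "3"]
}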

infrastructure/modules/aks/aks.tf (2)

46-48: Verify overlay networking mode compatibility.

The network plugin mode is set to "overlay". This mode:

  • Requires the containerd runtime
  • Has different performance characteristics
  • May affect existing workloads

Please verify these requirements are met; a reference sketch of the dual-stack overlay profile follows below.
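
For reference, a sketch of the overlay network profile as it appears in the plan output above (illustrative; values are taken from the plan rather than from the module source):

network_profile {
  network_plugin      = "azure"
  network_plugin_mode = "overlay"
  network_data_plane  = "azure"
  load_balancer_sku   = "standard"
  ip_versions         = ["IPv4", "IPv6"]
  pod_cidrs           = ["10.240.0.0/16", "fd10:59f0:8c79:240::/64"]
  service_cidrs       = ["10.250.0.0/24", "fd10:59f0:8c79:250::/108"]
  dns_service_ip      = "10.250.0.10"
}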

6-86: ⚠️ Potential issue

Enable disk encryption for AKS cluster.

The cluster configuration is missing disk encryption settings. This is flagged as a security concern by static analysis.

 resource "azurerm_kubernetes_cluster" "aks" {
   # ... existing configuration ...
+  disk_encryption_set_id = azurerm_disk_encryption_set.aks.id
 }
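
The diff above references azurerm_disk_encryption_set.aks, which the module does not define yet. A hypothetical sketch of that supporting resource, assuming a Key Vault key exposed as azurerm_key_vault_key.aks (not part of this PR), could be:

resource "azurerm_disk_encryption_set" "aks" {
  name                = "${var.prefix}-${var.environment}-aks-des"
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
  key_vault_key_id    = azurerm_key_vault_key.aks.id # hypothetical key, not defined in this module

  identity {
    type = "SystemAssigned"
  }
}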
🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 6-86: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources

(CKV_AZURE_227)

Comment on lines 1 to 36
module "aks" {
source = "../../modules/aks"
prefix = "auth"
environment = "at22"
subscription_id = var.subscription_id
location = "norwayeast"
aks_sku_tier = "Free"
kubernetes_version = "1.30"
vnet_address_space = [
"10.202.72.0/21",
"fd0a:7204:c37f:900::/56"
]
subnet_address_prefixes = {
aks_syspool = ["fdb8:8b5f:00f0:901::/64", "10.202.72.0/24"]
aks_workpool = ["fdb8:8b5f:00f0:902::/64", "10.202.73.0/24"]
}
pool_configs = {
syspool = {
vm_size = "standard_b2s_v2"
auto_scaling_enabled = "true"
node_count = "1"
min_count = "1"
max_count = "6"
}
workpool = {
vm_size = "standard_b2s_v2"
auto_scaling_enabled = "true"
node_count = "0"
min_count = "0"
max_count = "6"
}
}
aks_acrpull_scopes = [
"/subscriptions/a6e9ee7d-2b65-41e1-adfb-0c8c23515cf9/resourceGroups/acr/providers/Microsoft.ContainerRegistry/registries/altinncr"
]
}

🛠️ Refactor suggestion

Add essential AKS configurations.

The current configuration is missing several important AKS settings that should be considered for production use.

Consider adding these configurations:

  network_policy                 = "azure"
  network_plugin                 = "azure"
  network_plugin_mode           = "overlay"
  service_cidr                  = "10.0.0.0/16"
  dns_service_ip                = "10.0.0.10"
  
  maintenance_window = {
    allowed = [
      {
        day   = "Sunday"
        hours = [21, 22, 23]
      }
    ]
  }
  
  monitoring_metrics_enabled = true
  
  azure_policy_enabled = true
  
  key_vault_secrets_provider = {
    enabled                  = true
    rotation_poll_interval   = "2m"
  }

  subscription_id    = var.subscription_id
  location           = "norwayeast"
  aks_sku_tier       = "Free"
  kubernetes_version = "1.30"

⚠️ Potential issue

Fix invalid Kubernetes version.

The specified version "1.30" is invalid. As of January 2025, this version doesn't exist.

-  kubernetes_version = "1.30"
+  kubernetes_version = "1.28"  # Or the latest stable version supported by Azure

Comment on lines +34 to +41
  type = map(object({
    vm_size              = string
    auto_scaling_enabled = string
    node_count           = string
    min_count            = string
    max_count            = string
  }))
  description = "variables for nodepools"

⚠️ Potential issue

Fix data types in pool_configs variable.

The auto_scaling_enabled and node_count fields should use appropriate data types:

  • auto_scaling_enabled should be boolean instead of string
  • node_count, min_count, and max_count should be number instead of string
 variable "pool_configs" {
   type = map(object({
     vm_size              = string
-    auto_scaling_enabled = string
-    node_count           = string
-    min_count            = string
-    max_count            = string
+    auto_scaling_enabled = bool
+    node_count           = number
+    min_count           = number
+    max_count           = number
   }))
   description = "variables for nodepools"
 }
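
If the variable types are tightened this way, the module call in auth-at22-aks-rg/aks.tf would pass the same values unquoted, for example:

pool_configs = {
  syspool = {
    vm_size              = "standard_b2s_v2"
    auto_scaling_enabled = true
    node_count           = 1
    min_count            = 1
    max_count            = 6
  }
  workpool = {
    vm_size              = "standard_b2s_v2"
    auto_scaling_enabled = true
    node_count           = 0
    min_count            = 0
    max_count            = 6
  }
}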

TF_STATE_NAME: auth-at22-aks-rg.tfstate
TF_PROJECT: ./infrastructure/altinn-auth-test/auth-at22-aks-rg
ARM_CLIENT_ID: ${{ vars.TF_AZURE_CLIENT_ID }}
ARM_SUBSCRIPTION_ID: 37bac63a-b964-46b2-8de8-ba93c432ea1f

⚠️ Potential issue

Move subscription ID to GitHub variables.

The Azure subscription ID is hardcoded in the workflow file. This should be moved to GitHub variables for better security and maintainability.

-  ARM_SUBSCRIPTION_ID: 37bac63a-b964-46b2-8de8-ba93c432ea1f
+  ARM_SUBSCRIPTION_ID: ${{ vars.TF_AZURE_SUBSCRIPTION_ID }}

Comment on lines +66 to +68
  # azure_active_directory_role_based_access_control {
  #   azure_rbac_enabled = true
  # }

⚠️ Potential issue

Uncomment and configure RBAC.

The RBAC configuration is commented out. Azure RBAC should be enabled for better security.

-  # azure_active_directory_role_based_access_control {
-  #   azure_rbac_enabled = true
-  # }
+  azure_active_directory_role_based_access_control {
+    azure_rbac_enabled = true
+    managed = true
+  }

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
infrastructure/modules/aks/aks.tf (1)

72-74: ⚠️ Potential issue

Uncomment and configure RBAC.

The RBAC configuration is commented out. Azure RBAC should be enabled for better security.

-  # azure_active_directory_role_based_access_control {
-  #   azure_rbac_enabled = true
-  # }
+  azure_active_directory_role_based_access_control {
+    azure_rbac_enabled = true
+    managed = true
+  }
🧹 Nitpick comments (1)
infrastructure/modules/aks/monitor.tf (1)

11-15: Consider adding tags for resource management.

Both monitoring resources would benefit from having tags for better resource management and cost allocation.

Apply this diff to add essential tags:

 resource "azurerm_monitor_workspace" "aks" {
   name                = "${var.prefix}-${var.environment}-aks-amw"
   resource_group_name = azurerm_resource_group.aks.name
   location            = azurerm_resource_group.aks.location
+  tags = {
+    Environment = var.environment
+    Terraform   = "true"
+    Service     = "AKS Monitoring"
+  }
 }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 63fe602 and c55f415.

📒 Files selected for processing (2)
  • infrastructure/modules/aks/aks.tf (1 hunks)
  • infrastructure/modules/aks/monitor.tf (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/aks.tf

[HIGH] 6-92: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources

(CKV_AZURE_227)


[HIGH] 94-115: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources

(CKV_AZURE_227)

⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (3)
infrastructure/modules/aks/monitor.tf (1)

1-9: Consider increasing the retention period for Log Analytics Workspace.

The current retention period of 30 days might be insufficient for compliance requirements and long-term analysis. Consider increasing it based on your compliance needs and data analysis requirements.
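
A minimal sketch of a longer retention setting, assuming the module's naming variables; 90 days is an assumed value, not a requirement stated in this PR:

resource "azurerm_log_analytics_workspace" "aks" {
  name                = "${var.prefix}-${var.environment}-aks-law"
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
  retention_in_days   = 90 # assumed value; align with compliance requirements
}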

infrastructure/modules/aks/aks.tf (2)

25-43: Consider security implications of max_pods setting.

The max_pods setting of 200 per node is relatively high. This could potentially impact:

  1. Node stability under high load
  2. Network performance
  3. Security isolation between pods

Consider reducing this value based on your workload requirements and node size.


48-52: Verify IPv6 configuration compatibility.

The network configuration includes both IPv4 and IPv6. Ensure that:

  1. All dependent services support IPv6
  2. Network security groups are properly configured for IPv6
  3. Load balancer and ingress controllers support IPv6
✅ Verification successful

IPv6 configuration is properly implemented and supported

The verification confirms that:

  • Azure Load Balancer (standard SKU) and Traefik ingress controller are configured for IPv6
  • Network resources include proper IPv6 public IPs and prefixes
  • Dual-stack networking is correctly configured with appropriate CIDR ranges
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for IPv6-related configurations in other files
rg "IPv6|fd10:" -A 5

Length of output: 3939

Comment on lines +6 to +24
resource "azurerm_kubernetes_cluster" "aks" {
lifecycle {
ignore_changes = [
workload_autoscaler_profile,
default_node_pool[0].node_count,
windows_profile,
]
}
name = "${var.prefix}-${var.environment}-aks"
location = azurerm_resource_group.aks.location
resource_group_name = azurerm_resource_group.aks.name
dns_prefix = "${var.prefix}-${var.environment}"
sku_tier = var.aks_sku_tier
kubernetes_version = var.kubernetes_version
automatic_upgrade_channel = "patch"
node_os_upgrade_channel = "NodeImage"
oidc_issuer_enabled = true
workload_identity_enabled = true


⚠️ Potential issue

Enable disk encryption for improved security.

Based on the static analysis, the AKS cluster should have disk encryption enabled for temp disks, caches, and data flows.

Add the following configuration to enable encryption:

   workload_identity_enabled = true
+  disk_encryption_set_id = azurerm_disk_encryption_set.aks.id
+  
+  security_profile {
+    encryption {
+      enable_encryption_at_host = true
+    }
+  }
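
Note that azurerm_disk_encryption_set.aks referenced above is not defined in this module, and host-based encryption is normally enabled per node pool; the following is only a sketch of the supporting resources, assuming a Key Vault and key managed elsewhere (azurerm_key_vault.aks and azurerm_key_vault_key.aks are hypothetical names):

resource "azurerm_disk_encryption_set" "aks" {
  name                = "${var.prefix}-${var.environment}-aks-des" # hypothetical name
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
  key_vault_key_id    = azurerm_key_vault_key.aks.id # assumes a key defined elsewhere

  identity {
    type = "SystemAssigned"
  }
}

# The set's identity needs crypto access to the key, for example via RBAC:
resource "azurerm_role_assignment" "aks_des_key_access" {
  scope                = azurerm_key_vault.aks.id # assumes an RBAC-enabled Key Vault
  role_definition_name = "Key Vault Crypto Service Encryption User"
  principal_id         = azurerm_disk_encryption_set.aks.identity[0].principal_id
}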

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 6-92: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources

(CKV_AZURE_227)

Comment on lines +94 to +115
resource "azurerm_kubernetes_cluster_node_pool" "workpool" {
lifecycle {
ignore_changes = [
node_count,
]
}
name = "workpool"
os_sku = "AzureLinux"
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
vnet_subnet_id = azurerm_subnet.aks["aks_workpool"].id
max_pods = 200
auto_scaling_enabled = var.pool_configs["workpool"].auto_scaling_enabled
node_count = var.pool_configs["workpool"].node_count
vm_size = var.pool_configs["workpool"].vm_size
min_count = var.pool_configs["workpool"].min_count
max_count = var.pool_configs["workpool"].max_count
zones = ["1", "2", "3"]
orchestrator_version = var.kubernetes_version
upgrade_settings {
max_surge = "10%"
}
}

⚠️ Potential issue

Apply consistent encryption settings to the workpool.

The workpool node pool should have the same encryption settings as the main cluster for consistent security.

Add disk encryption configuration to match the main cluster:

   upgrade_settings {
     max_surge = "10%"
   }
+  enable_host_encryption = true
📝 Committable suggestion


Suggested change
resource "azurerm_kubernetes_cluster_node_pool" "workpool" {
lifecycle {
ignore_changes = [
node_count,
]
}
name = "workpool"
os_sku = "AzureLinux"
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
vnet_subnet_id = azurerm_subnet.aks["aks_workpool"].id
max_pods = 200
auto_scaling_enabled = var.pool_configs["workpool"].auto_scaling_enabled
node_count = var.pool_configs["workpool"].node_count
vm_size = var.pool_configs["workpool"].vm_size
min_count = var.pool_configs["workpool"].min_count
max_count = var.pool_configs["workpool"].max_count
zones = ["1", "2", "3"]
orchestrator_version = var.kubernetes_version
upgrade_settings {
max_surge = "10%"
}
}
resource "azurerm_kubernetes_cluster_node_pool" "workpool" {
lifecycle {
ignore_changes = [
node_count,
]
}
name = "workpool"
os_sku = "AzureLinux"
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
vnet_subnet_id = azurerm_subnet.aks["aks_workpool"].id
max_pods = 200
auto_scaling_enabled = var.pool_configs["workpool"].auto_scaling_enabled
node_count = var.pool_configs["workpool"].node_count
vm_size = var.pool_configs["workpool"].vm_size
min_count = var.pool_configs["workpool"].min_count
max_count = var.pool_configs["workpool"].max_count
zones = ["1", "2", "3"]
orchestrator_version = var.kubernetes_version
upgrade_settings {
max_surge = "10%"
}
enable_host_encryption = true
}
🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 94-115: Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources

(CKV_AZURE_227)

coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c55f415 and e4da74a.

📒 Files selected for processing (2)
  • infrastructure/modules/aks/flux.tf (1 hunks)
  • infrastructure/modules/aks/monitor.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • infrastructure/modules/aks/flux.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/monitor.tf

[HIGH] 21-39: Ensure that Storage blobs restrict public access

(CKV_AZURE_190)


[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key

(CKV2_AZURE_1)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Analyze (go)
🔇 Additional comments (4)
infrastructure/modules/aks/monitor.tf (4)

1-9: Log Analytics Workspace Configuration is Solid

The configuration for the Azure Log Analytics Workspace is implemented correctly with a consistent naming convention, retention period, and a system-assigned identity.


11-15: Monitor Workspace Setup is Clear

The Azure Monitor Workspace is correctly defined with appropriate references to the resource group and location.


17-19: Random ID Generation is Correct

The random ID resource is straightforward and correctly set up for generating a unique suffix, aiding in storage account name uniqueness.


41-54: Diagnostic Setting Configuration Looks Appropriate

The diagnostic setting for the AKS cluster correctly targets the cluster and links to the storage account for log retention. Note that metrics collection is explicitly disabled, which appears intentional. Ensure that the referenced azurerm_kubernetes_cluster.aks resource exists in your deployment context.

Comment on lines +21 to +39
resource "azurerm_storage_account" "aks" {
name = "${var.prefix}${var.environment}akslog${random_id.aks.hex}"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
account_tier = "Standard"
account_replication_type = "ZRS"
account_kind = "StorageV2"
min_tls_version = "TLS1_2"
is_hns_enabled = true

network_rules {
default_action = "Deny"
bypass = ["AzureServices"]
ip_rules = [
]
virtual_network_subnet_ids = [
]
}
}

🛠️ Refactor suggestion

⚠️ Potential issue

Security: Restrict Public Access and Consider CMK Encryption

The Azure Storage Account is configured with standard parameters. However, please note that:

  • Although network rules are set to "Deny" by default, it is best practice to explicitly restrict public blob access as well (allow_blob_public_access = false on older azurerm providers; allow_nested_items_to_be_public = false on azurerm 3.x and later).
  • Additionally, Checkov has flagged that for critical data, encryption using a Customer Managed Key (CMK) might be required. Please evaluate if this applies to your use case and add a CMK configuration if necessary.

Suggested diff to add explicit public access restriction:

 resource "azurerm_storage_account" "aks" {
   name                     = "${var.prefix}${var.environment}akslog${random_id.aks.hex}"
   resource_group_name      = azurerm_resource_group.aks.name
   location                 = azurerm_resource_group.aks.location
   account_tier             = "Standard"
   account_replication_type = "ZRS"
   account_kind             = "StorageV2"
   min_tls_version          = "TLS1_2"
+  allow_blob_public_access = false
   is_hns_enabled           = true
   network_rules {
     default_action = "Deny"
     bypass         = ["AzureServices"]
     ip_rules = [
     ]
     virtual_network_subnet_ids = [
     ]
   }
 }
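
For the CMK point, a rough sketch of what that could look like, assuming a user-assigned identity and Key Vault key that are not part of this module (azurerm_user_assigned_identity.aks_storage and azurerm_key_vault_key.aks_storage are hypothetical names):

resource "azurerm_storage_account" "aks" {
  # existing name, location, replication and network_rules settings unchanged

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.aks_storage.id] # hypothetical identity
  }

  customer_managed_key {
    key_vault_key_id          = azurerm_key_vault_key.aks_storage.id # hypothetical key
    user_assigned_identity_id = azurerm_user_assigned_identity.aks_storage.id
  }
}
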
📝 Committable suggestion


Suggested change
resource "azurerm_storage_account" "aks" {
name = "${var.prefix}${var.environment}akslog${random_id.aks.hex}"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
account_tier = "Standard"
account_replication_type = "ZRS"
account_kind = "StorageV2"
min_tls_version = "TLS1_2"
is_hns_enabled = true
network_rules {
default_action = "Deny"
bypass = ["AzureServices"]
ip_rules = [
]
virtual_network_subnet_ids = [
]
}
}
resource "azurerm_storage_account" "aks" {
name = "${var.prefix}${var.environment}akslog${random_id.aks.hex}"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
account_tier = "Standard"
account_replication_type = "ZRS"
account_kind = "StorageV2"
min_tls_version = "TLS1_2"
allow_blob_public_access = false
is_hns_enabled = true
network_rules {
default_action = "Deny"
bypass = ["AzureServices"]
ip_rules = [
]
virtual_network_subnet_ids = [
]
}
}
🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 21-39: Ensure that Storage blobs restrict public access

(CKV_AZURE_190)


[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key

(CKV2_AZURE_1)

coderabbitai bot left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e4da74a and c3ee49e.

📒 Files selected for processing (2)
  • infrastructure/modules/aks/monitor.tf (1 hunks)
  • infrastructure/modules/aks/prometheus.tf (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
infrastructure/modules/aks/monitor.tf

[HIGH] 21-39: Ensure that Storage blobs restrict public access

(CKV_AZURE_190)


[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key

(CKV2_AZURE_1)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Analyze (go)
🔇 Additional comments (5)
infrastructure/modules/aks/monitor.tf (5)

1-9: Log Analytics Workspace Configuration Looks Good
The configuration for the Log Analytics Workspace is clear and follows a predictable naming scheme with a 30‐day retention period.


11-15: Monitor Workspace Resource
The minimal configuration for the monitor workspace is acceptable given the current requirements.


17-19: Random ID Resource – LGTM
This resource generates a short unique identifier as intended.


41-54: Monitor Diagnostic Setting – Verification of Log and Metric Configuration
The diagnostic setting correctly targets the AKS cluster and routes logs to the storage account. Please verify that the selected log category ("kube-audit-admin") and the disabled metric collection are intentional choices. Additional log categories or metric collections may be required based on your operational needs.
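
If broader coverage is wanted, additional categories can be enabled alongside the existing one; a sketch using standard AKS log category names (enable only what is actually needed, since audit logs in particular can be high volume):

resource "azurerm_monitor_diagnostic_setting" "aks" {
  # existing target, storage account and kube-audit-admin settings unchanged

  enabled_log {
    category = "kube-apiserver"
  }
  enabled_log {
    category = "cluster-autoscaler"
  }
  enabled_log {
    category = "guard"
  }
}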


21-39: 🛠️ Refactor suggestion

Storage Account – Enhance Security Settings
While the storage account configuration is functionally correct, two security enhancements are recommended:

  1. Restrict Public Blob Access: Explicitly disable public blob access (allow_blob_public_access = false on older azurerm providers, allow_nested_items_to_be_public = false on azurerm 3.x and later) to ensure that blobs are not inadvertently exposed.
  2. Consider Encryption with a Customer Managed Key: If the storage account will hold critical data, configuring encryption with a customer managed key should be evaluated and added as needed.

Proposed diff snippet for adding the public access restriction:

 resource "azurerm_storage_account" "aks" {
   name                     = "${var.prefix}${var.environment}akslog${random_id.aks.hex}"
   resource_group_name      = azurerm_resource_group.aks.name
   location                 = azurerm_resource_group.aks.location
   account_tier             = "Standard"
   account_replication_type = "ZRS"
   account_kind             = "StorageV2"
   min_tls_version          = "TLS1_2"
   is_hns_enabled           = true
+  allow_blob_public_access = false
   network_rules {
     default_action = "Deny"
     bypass         = ["AzureServices"]
     ip_rules = [
     ]
     virtual_network_subnet_ids = [
     ]
   }
 }

Likely invalid or redundant comment.

🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 21-39: Ensure that Storage blobs restrict public access

(CKV_AZURE_190)


[HIGH] 21-39: Ensure storage for critical data are encrypted with Customer Managed Key

(CKV2_AZURE_1)

Comment on lines 1 to 6
resource "azurerm_monitor_data_collection_endpoint" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}}-mdce"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
kind = "Linux"
}

🛠️ Refactor suggestion

⚠️ Potential issue

Fix Extra Brace in Data Collection Endpoint Name
The name attribute on line 2 contains an extra closing curly bracket (}}). This may lead to unexpected naming issues.
Suggested fix:

-  name                = "${azurerm_kubernetes_cluster.aks.name}}-mdce"
+  name                = "${azurerm_kubernetes_cluster.aks.name}-mdce"
📝 Committable suggestion


Suggested change
resource "azurerm_monitor_data_collection_endpoint" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}}-mdce"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
kind = "Linux"
}
resource "azurerm_monitor_data_collection_endpoint" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}-mdce"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
kind = "Linux"
}

Comment on lines 41 to 49
resource "azurerm_monitor_data_collection_rule_association" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.aks.id
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster."
depends_on = [
azurerm_monitor_data_collection_rule.aks
]
}

🛠️ Refactor suggestion

⚠️ Potential issue

Resolve Extra Brace in Data Collection Rule Association Name
The name on line 42 also contains an extra }. Correcting this is necessary for predictable resource naming.
Suggested fix:

-  name                    = "${azurerm_kubernetes_cluster.aks.name}}-mdcra"
+  name                    = "${azurerm_kubernetes_cluster.aks.name}-mdcra"
📝 Committable suggestion


Suggested change
resource "azurerm_monitor_data_collection_rule_association" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.aks.id
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster."
depends_on = [
azurerm_monitor_data_collection_rule.aks
]
}
resource "azurerm_monitor_data_collection_rule_association" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.aks.id
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster."
depends_on = [
azurerm_monitor_data_collection_rule.aks
]
}

Comment on lines 8 to 39
resource "azurerm_monitor_data_collection_rule" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}}-mdcr"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.aks.id
kind = "Linux"

destinations {
monitor_account {
monitor_account_id = azurerm_monitor_workspace.aks.id
name = azurerm_monitor_workspace.aks.name
}
}

data_flow {
streams = ["Microsoft-PrometheusMetrics"]
destinations = ["${azurerm_monitor_workspace.aks.name}"]
}


data_sources {
prometheus_forwarder {
streams = ["Microsoft-PrometheusMetrics"]
name = "PrometheusDataSource"
}
}

description = "DCR for Azure Monitor Metrics Profile (Managed Prometheus)"
depends_on = [
azurerm_monitor_data_collection_endpoint.aks
]
}

🛠️ Refactor suggestion

⚠️ Potential issue

Correct Name Interpolation in Data Collection Rule
Similarly, the name attribute on line 9 for the data collection rule resource includes an extraneous }. Please remove the extra brace so the generated name does not contain a stray }.
Suggested fix:

-  name                        = "${azurerm_kubernetes_cluster.aks.name}}-mdcr"
+  name                        = "${azurerm_kubernetes_cluster.aks.name}-mdcr"

Additionally, review the destination and data flow blocks; they appear to be configured properly for routing the "Microsoft-PrometheusMetrics" stream to your monitor workspace.

📝 Committable suggestion


Suggested change
resource "azurerm_monitor_data_collection_rule" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}}-mdcr"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.aks.id
kind = "Linux"
destinations {
monitor_account {
monitor_account_id = azurerm_monitor_workspace.aks.id
name = azurerm_monitor_workspace.aks.name
}
}
data_flow {
streams = ["Microsoft-PrometheusMetrics"]
destinations = ["${azurerm_monitor_workspace.aks.name}"]
}
data_sources {
prometheus_forwarder {
streams = ["Microsoft-PrometheusMetrics"]
name = "PrometheusDataSource"
}
}
description = "DCR for Azure Monitor Metrics Profile (Managed Prometheus)"
depends_on = [
azurerm_monitor_data_collection_endpoint.aks
]
}
resource "azurerm_monitor_data_collection_rule" "aks" {
name = "${azurerm_kubernetes_cluster.aks.name}-mdcr"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.aks.id
kind = "Linux"
destinations {
monitor_account {
monitor_account_id = azurerm_monitor_workspace.aks.id
name = azurerm_monitor_workspace.aks.name
}
}
data_flow {
streams = ["Microsoft-PrometheusMetrics"]
destinations = ["${azurerm_monitor_workspace.aks.name}"]
}
data_sources {
prometheus_forwarder {
streams = ["Microsoft-PrometheusMetrics"]
name = "PrometheusDataSource"
}
}
description = "DCR for Azure Monitor Metrics Profile (Managed Prometheus)"
depends_on = [
azurerm_monitor_data_collection_endpoint.aks
]
}

coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
infrastructure/modules/aks/amw.tf (1)

22-26: Nitpick: Simplify destination interpolation.
While valid, the use of interpolation in the destinations block on line 24 is unnecessary in Terraform 0.12+ and can be simplified. For maintainability, consider updating it to:

destinations = [azurerm_monitor_workspace.aks.name]
infrastructure/modules/aks/law.tf (1)

49-55: Approved: Data collection rule association for Container Insights is correctly implemented.
The association resource is defined properly with accurate references to the AKS cluster and the corresponding data collection rule. Consider adding an explicit depends_on if ordering becomes an issue, although the implicit dependency via references is generally sufficient.
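
If explicit ordering is ever wanted, it is a one-line addition (sketch):

resource "azurerm_monitor_data_collection_rule_association" "law" {
  # existing name, target and rule references unchanged
  depends_on = [azurerm_monitor_data_collection_rule.law]
}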

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c3ee49e and 771bd40.

📒 Files selected for processing (2)
  • infrastructure/modules/aks/amw.tf (1 hunks)
  • infrastructure/modules/aks/law.tf (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Analyze (go)
🔇 Additional comments (1)
infrastructure/modules/aks/law.tf (1)

1-47: Approved: Resource for Log Analytics data collection rule appears correctly configured.
The resource block uses proper interpolation and JSON encoding. All configuration aspects—from destinations to data flows and data sources—are logically arranged and consistent with best practices.

Comment on lines 1 to 6
resource "azurerm_monitor_data_collection_endpoint" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdce"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
kind = "Linux"
}

⚠️ Potential issue

Critical: Fix string interpolation in data collection endpoint name.
The name attribute on line 2 has an extra closing curly bracket (}}), which leaves a stray } in the generated resource name and will fail Azure name validation. Please update the interpolation to use a single closing bracket.

-  name                = "${azurerm_monitor_workspace.aks.name}}-mdce"
+  name                = "${azurerm_monitor_workspace.aks.name}-mdce"
📝 Committable suggestion


Suggested change
resource "azurerm_monitor_data_collection_endpoint" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdce"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
kind = "Linux"
}
resource "azurerm_monitor_data_collection_endpoint" "amw" {
name = "${azurerm_monitor_workspace.aks.name}-mdce"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
kind = "Linux"
}

Comment on lines 41 to 48
resource "azurerm_monitor_data_collection_rule_association" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.amw.id
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster."
depends_on = [
azurerm_monitor_data_collection_rule.amw
]

⚠️ Potential issue

Critical: Adjust string interpolation in data collection rule association name.
Line 42 repeats the same interpolation error with an extra closing curly bracket. Correcting this will ensure proper resource naming and avoid potential failures during deployment.

-  name                    = "${azurerm_monitor_workspace.aks.name}}-mdcra"
+  name                    = "${azurerm_monitor_workspace.aks.name}-mdcra"
📝 Committable suggestion


Suggested change
resource "azurerm_monitor_data_collection_rule_association" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.amw.id
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster."
depends_on = [
azurerm_monitor_data_collection_rule.amw
]
resource "azurerm_monitor_data_collection_rule_association" "amw" {
name = "${azurerm_monitor_workspace.aks.name}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.amw.id
description = "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster."
depends_on = [
azurerm_monitor_data_collection_rule.amw
]
}

Comment on lines 8 to 14
resource "azurerm_monitor_data_collection_rule" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdcr"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.amw.id
kind = "Linux"


⚠️ Potential issue

Critical: Correct string interpolation in monitor data collection rule name.
Line 9 similarly contains an extra closing curly bracket (}}) in the name attribute. Consistent naming is crucial for resource identification, so please fix it as shown below.

-  name                        = "${azurerm_monitor_workspace.aks.name}}-mdcr"
+  name                        = "${azurerm_monitor_workspace.aks.name}-mdcr"
📝 Committable suggestion


Suggested change
resource "azurerm_monitor_data_collection_rule" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdcr"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.amw.id
kind = "Linux"
resource "azurerm_monitor_data_collection_rule" "amw" {
name = "${azurerm_monitor_workspace.aks.name}-mdcr"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
data_collection_endpoint_id = azurerm_monitor_data_collection_endpoint.amw.id
kind = "Linux"
}

coderabbitai bot left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 771bd40 and 899ed85.

📒 Files selected for processing (2)
  • infrastructure/modules/aks/amw-collection.tf (1 hunks)
  • infrastructure/modules/aks/law-collection.tf (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Analyze (go)
🔇 Additional comments (4)
infrastructure/modules/aks/law-collection.tf (4)

1-11: Good: Clear and Consistent Naming in Log Analytics Data Collection Rule.

The resource "azurerm_monitor_data_collection_rule" for Log Analytics uses proper interpolation:

name = "${azurerm_log_analytics_workspace.aks.name}-mdcr"

and clear references to the Log Analytics workspace. The configuration is clean and consistent.


23-44: Good: Robust Extension Configuration for Container Insights.

The data_sources block, including both the syslog configuration and the ContainerInsights extension (with jsonencode), is well structured and clear. This design makes it easier to adjust settings (such as the collection interval or namespace filters) in the future.
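
For readers without the file open, such a block typically follows this pattern (a sketch of the shape only, not the file's exact contents):

data_sources {
  syslog {
    streams        = ["Microsoft-Syslog"]
    facility_names = ["daemon", "kern", "syslog"]
    log_levels     = ["Warning", "Error", "Critical"]
    name           = "sysLogsDataSource"
  }

  extension {
    streams        = ["Microsoft-ContainerInsights-Group-Default"]
    extension_name = "ContainerInsights"
    extension_json = jsonencode({
      dataCollectionSettings = {
        interval               = "1m"
        namespaceFilteringMode = "Off"
        enableContainerLogV2   = true
      }
    })
    name = "ContainerInsightsExtension"
  }
}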


49-55: Approved: Association Resource is Configured Correctly.

The "azurerm_monitor_data_collection_rule_association" resource for Log Analytics is defined using clear interpolation and appropriate resource references:

name = "${azurerm_log_analytics_workspace.aks.name}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.law.id

No issues detected.


13-22: Observation: Verify Data Flow Destination Reference.

Within the first data_flow block, the destination is specified as:

destinations = ["${azurerm_log_analytics_workspace.aks.name}"]

Ensure that this string interpolation correctly matches the expected destination identifier for your setup. It appears intentional, but please confirm it aligns with your overall monitoring configuration.

✅ Verification successful

Data Flow Destination Reference Verified

After reviewing the Terraform snippet, the string interpolation

destinations = ["${azurerm_log_analytics_workspace.aks.name}"]

correctly references the Log Analytics Workspace name, aligning with the intended monitoring configuration.

  • The interpolation syntax is valid for Terraform.
  • There is no mismatch or misreference identified in this context.
🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found

Comment on lines 1 to 6
resource "azurerm_monitor_data_collection_endpoint" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdce"
resource_group_name = azurerm_resource_group.aks.name
location = azurerm_resource_group.aks.location
kind = "Linux"
}

🛠️ Refactor suggestion

⚠️ Potential issue

Critical: Fix Extra Curly Braces in Interpolation for Data Collection Endpoint Name.

On line 2 the name is declared as:

name = "${azurerm_monitor_workspace.aks.name}}-mdce"

There is an extra closing brace, which leaves a stray } in the generated resource name and will fail Azure name validation at apply time. Please update it to:

name = "${azurerm_monitor_workspace.aks.name}-mdce"

Comment on lines 8 to 10
resource "azurerm_monitor_data_collection_rule" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdcr"
resource_group_name = azurerm_resource_group.aks.name

🛠️ Refactor suggestion

⚠️ Potential issue

Critical: Correct Interpolation Syntax in Data Collection Rule Name.

The resource "azurerm_monitor_data_collection_rule" on line 9 uses the pattern:

name = "${azurerm_monitor_workspace.aks.name}}-mdcr"

The extra closing brace (}}) should be removed to avoid errors. Revise this line to:

name = "${azurerm_monitor_workspace.aks.name}-mdcr"

Comment on lines 41 to 43
resource "azurerm_monitor_data_collection_rule_association" "amw" {
name = "${azurerm_monitor_workspace.aks.name}}-mdcra"
target_resource_id = azurerm_kubernetes_cluster.aks.id

🛠️ Refactor suggestion

⚠️ Potential issue

Critical: Fix Interpolation in Association Resource Name.

In the "azurerm_monitor_data_collection_rule_association" resource (line 42), the name is defined as:

name = "${azurerm_monitor_workspace.aks.name}}-mdcra"

Remove the extra curly brace so it becomes:

name = "${azurerm_monitor_workspace.aks.name}-mdcra"
