
terraform 1.10 causes "Error: Unsupported argument" on module source that worked with terraform 1.9 #36132

Closed
jamiekt opened this issue Nov 28, 2024 · 7 comments
Labels
bug, explained (a Terraform Core team member has described the root cause of this issue in code)

Comments

jamiekt commented Nov 28, 2024

Terraform Version

1.10

Terraform Configuration Files

I haven't been able to build an easily distributable reproduction repo; the problem only occurs in our private repos. We have multiple occurrences of it, though, and they all refer to the same module source, so I'm providing that source here in case it helps. Note that the problem occurs when running terraform validate as well as terraform plan.

A module block that uses the module source is:

module "product-authoring-private-topic" {
  source = "github.com/our-org/kafka-automation?depth=1//terraform-modules/private-topic"

  name                      = "product-authoring.product.v1beta1"
  partitions                = 3
  replication_factor        = 2
  cleanup_policy            = "delete"
  retention_ms              = 1209600000 # 14 days
  compression_type          = "snappy"
  min_cleanable_dirty_ratio = 0.5
  min_insync_replicas       = 2
}

and the module source itself (which is where we think the problem is occurring) spans the following four files:

main.tf:

resource "kafka_topic" "this" {
  name               = "${data.external.github_repo.result.name}.${var.name}"
  replication_factor = var.replication_factor
  partitions         = var.partitions

  config = {
    "cleanup.policy"                          = "${var.cleanup_policy}"
    "compression.type"                        = "${var.compression_type}"
    "delete.retention.ms"                     = "${var.delete_retention_ms}"
    "file.delete.delay.ms"                    = "${var.file_delete_delay_ms}"
    "flush.messages"                          = "${var.flush_messages}"
    "flush.ms"                                = "${var.flush_ms}"
    "follower.replication.throttled.replicas" = "${var.follower_replication_throttled_replicas}"
    "index.interval.bytes"                    = "${var.index_interval_bytes}"
    "leader.replication.throttled.replicas"   = "${var.leader_replication_throttled_replicas}"
    "max.compaction.lag.ms"                   = "${var.max_compaction_lag_ms}"
    "max.message.bytes"                       = "${var.max_message_bytes}"
    "message.timestamp.difference.max.ms"     = "${var.message_timestamp_difference_max_ms}"
    "message.timestamp.type"                  = "${var.message_timestamp_type}"
    "min.cleanable.dirty.ratio"               = "${var.min_cleanable_dirty_ratio}"
    "min.compaction.lag.ms"                   = "${var.min_compaction_lag_ms}"
    "min.insync.replicas"                     = "${var.min_insync_replicas}"
    "preallocate"                             = "${var.preallocate}"
    "retention.bytes"                         = "${var.retention_bytes}"
    "retention.ms"                            = "${var.retention_ms}"
    "segment.bytes"                           = "${var.segment_bytes}"
    "segment.index.bytes"                     = "${var.segment_index_bytes}"
    "segment.jitter.ms"                       = "${var.segment_jitter_ms}"
    "segment.ms"                              = "${var.segment_ms}"
    "unclean.leader.election.enable"          = "${var.unclean_leader_election_enable}"
    "message.downconversion.enable"           = "${var.message_downconversion_enable}"
  }
}

data "external" "github_repo" {
  program = ["bash", "${path.module}/external/get-github-repo.sh"]
}

outputs.tf:

output "name" {
  description = "The final name for the topic"
  value       = kafka_topic.this.name
}

terraform.tf:

terraform {
  required_providers {
    kafka = {
      source  = "Mongey/kafka"
      version = "~> 0.5.2"
    }
  }
}

vars.tf:

variable "name" {
  type        = string
  description = "The name of the topic. The final topic name will be prefixed with the GitHub repository name"
}

variable "partitions" {
  type        = number
  description = "The number of partitions for the topic"
}

variable "replication_factor" {
  type        = number
  description = "The number of replicas for the topic"
  validation {
    condition     = var.replication_factor >= 2
    error_message = "Replication factor should not be less than 2."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_cleanup.policy
variable "cleanup_policy" {
  type        = string
  description = "This config designates the retention policy to use on log segments"
  default     = "delete"
  validation {
    condition     = contains(["compact", "delete", "compact,delete"], var.cleanup_policy)
    error_message = "Cleanup policy should either be 'compact', 'delete' or 'compact,delete'."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_compression.type
variable "compression_type" {
  type        = string
  description = "Specify the final compression type for a given topic"
  default     = "producer"
  validation {
    condition     = contains(["uncompressed", "zstd", "lz4", "snappy", "gzip", "producer"], var.compression_type)
    error_message = "Compression type should be one of: uncompressed, zstd, lz4, snappy, gzip, producer."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_delete.retention.ms
variable "delete_retention_ms" {
  type        = number
  description = "The amount of time to retain delete tombstone markers for log compacted topics"
  # 1 day
  default = 86400000
  validation {
    condition     = var.delete_retention_ms >= 0
    error_message = "Value of delete_retention_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_file.delete.delay.ms
variable "file_delete_delay_ms" {
  type        = number
  description = "The time to wait before deleting a file from the filesystem"
  # 1 minute
  default = 60000
  validation {
    condition     = var.file_delete_delay_ms >= 0
    error_message = "Value of file_delete_delay_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_flush.messages
variable "flush_messages" {
  type        = number
  description = "This setting allows specifying an interval at which we will force an fsync of data written to the log"
  default     = 9223372036854775807
  validation {
    condition     = var.flush_messages >= 1
    error_message = "Value of flush_messages should not be less than 1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_flush.ms
variable "flush_ms" {
  type        = number
  description = "This setting allows specifying a time interval at which we will force an fsync of data written to the log"
  default     = 9223372036854775807
  validation {
    condition     = var.flush_ms >= 0
    error_message = "Value of flush_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_follower.replication.throttled.replicas
variable "follower_replication_throttled_replicas" {
  type        = string
  description = "A list of replicas for which log replication should be throttled on the follower side"
  default     = ""
}

# https://kafka.apache.org/documentation/#topicconfigs_index.interval.bytes
variable "index_interval_bytes" {
  type        = number
  description = "This setting controls how frequently Kafka adds an index entry to its offset index"
  default     = 4096
  validation {
    condition     = var.index_interval_bytes >= 0
    error_message = "Value of index_interval_bytes should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_leader.replication.throttled.replicas
variable "leader_replication_throttled_replicas" {
  type        = string
  description = "A list of replicas for which log replication should be throttled on the leader side"
  default     = ""
}

# https://kafka.apache.org/documentation/#topicconfigs_max.compaction.lag.ms
variable "max_compaction_lag_ms" {
  type        = number
  description = "The maximum time a message will remain ineligible for compaction in the log"
  default     = 9223372036854775807
  validation {
    condition     = var.max_compaction_lag_ms >= 1
    error_message = "Value of max_compaction_lag_ms should not be less than 1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_max.message.bytes
variable "max_message_bytes" {
  type        = number
  description = "The largest record batch size allowed by Kafka (after compression if compression is enabled)"
  default     = 1048588
  validation {
    condition     = var.max_message_bytes >= 0
    error_message = "Value of max_message_bytes should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_message.timestamp.difference.max.ms
variable "message_timestamp_difference_max_ms" {
  type        = number
  description = "The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message"
  default     = 9223372036854775807
  validation {
    condition     = var.message_timestamp_difference_max_ms >= 0
    error_message = "Value of message_timestamp_difference_max_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_message.timestamp.type
variable "message_timestamp_type" {
  type        = string
  description = "Define whether the timestamp in the message is message create time or log append time"
  default     = "CreateTime"
  validation {
    condition     = contains(["CreateTime", "LogAppendTime"], var.message_timestamp_type)
    error_message = "Message timestamp type should be CreateTime or LogAppendTime."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_min.cleanable.dirty.ratio
variable "min_cleanable_dirty_ratio" {
  type        = number
  description = "This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled)"
  default     = 0.5
  validation {
    condition     = var.min_cleanable_dirty_ratio >= 0 && var.min_cleanable_dirty_ratio <= 1
    error_message = "Value of min_cleanable_dirty_ratio should be between 0 and 1, 0 and 1 inclusive."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_min.compaction.lag.ms
variable "min_compaction_lag_ms" {
  type        = number
  description = "The minimum time a message will remain uncompacted in the log"
  default     = 0
  validation {
    condition     = var.min_compaction_lag_ms >= 0
    error_message = "Value of min_compaction_lag_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_min.insync.replicas
variable "min_insync_replicas" {
  type        = number
  description = "When a producer sets acks to 'all' (or '-1'), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful"
  default     = 1
  validation {
    condition     = var.min_insync_replicas >= 1
    error_message = "Value of min_insync_replicas should not be less than 1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_preallocate
variable "preallocate" {
  type        = bool
  description = "True if we should preallocate the file on disk when creating a new log segment"
  default     = false
}

# https://kafka.apache.org/documentation/#topicconfigs_retention.bytes
variable "retention_bytes" {
  type        = number
  description = "This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the 'delete' retention policy"
  default     = -1
}

# https://kafka.apache.org/documentation/#topicconfigs_retention.ms
variable "retention_ms" {
  type        = number
  description = "This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the 'delete' retention policy"
  # 7 days
  default = 604800000
  validation {
    condition     = var.retention_ms >= -1
    error_message = "Value of retention_ms should not be less than -1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.bytes
variable "segment_bytes" {
  type        = number
  description = "This configuration controls the segment file size for the log"
  # 1 gibibyte
  default = 1073741824
  validation {
    condition     = var.segment_bytes >= 14
    error_message = "Value of segment_bytes should not be less than 14."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.index.bytes
variable "segment_index_bytes" {
  type        = number
  description = "This configuration controls the size of the index that maps offsets to file positions"
  # 10 mebibytes
  default = 10485760
  validation {
    condition     = var.segment_index_bytes >= 4
    error_message = "Value of segment_index_bytes should not be less than 4."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.jitter.ms
variable "segment_jitter_ms" {
  type        = number
  description = "The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling"
  default     = 0
  validation {
    condition     = var.segment_jitter_ms >= 0
    error_message = "Value of segment_jitter_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.ms
variable "segment_ms" {
  type        = number
  description = "This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data."
  # 7 days
  default = 604800000
  validation {
    condition     = var.segment_ms >= 1
    error_message = "Value of segment_ms should not be less than 1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_unclean.leader.election.enable
variable "unclean_leader_election_enable" {
  type        = bool
  description = "Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss"
  default     = false
}

# https://kafka.apache.org/documentation/#topicconfigs_message.downconversion.enable
variable "message_downconversion_enable" {
  type        = bool
  description = "This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests"
  default     = true
}

Debug Output

https://gist.github.com/jamiekt/f8e9c59ce3605e72446b238e1cb49e82

Expected Behavior

When using Terraform 1.9, terraform validate ran successfully. We expect the same with Terraform 1.10.

Actual Behavior

When using Terraform 1.10, the same command fails with:

╷
│ Error: Unsupported argument
│ 
│   on main.tf line 36, in module "product-authoring-private-topic":
│   36:   name                      = "product-authoring.product.v1beta1"
│ 
│ An argument named "name" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on main.tf line 37, in module "product-authoring-private-topic":
│   37:   partitions                = 3
│ 
│ An argument named "partitions" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on main.tf line 38, in module "product-authoring-private-topic":
│   38:   replication_factor        = 2
│ 
│ An argument named "replication_factor" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on main.tf line 39, in module "product-authoring-private-topic":
│   39:   cleanup_policy            = "delete"
│ 
│ An argument named "cleanup_policy" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on main.tf line 40, in module "product-authoring-private-topic":
│   40:   retention_ms              = 1209600000 # 14 days
│ 
│ An argument named "retention_ms" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on main.tf line 41, in module "product-authoring-private-topic":
│   41:   compression_type          = "snappy"
│ 
│ An argument named "compression_type" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on main.tf line 42, in module "product-authoring-private-topic":
│   42:   min_cleanable_dirty_ratio = 0.5
│ 
│ An argument named "min_cleanable_dirty_ratio" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on main.tf line 43, in module "product-authoring-private-topic":
│   43:   min_insync_replicas       = 2
│ 
│ An argument named "min_insync_replicas" is not expected here.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.

Steps to Reproduce

  1. terraform init
  2. terraform validate

Additional Context

The error occurs when running in a GitHub Actions workflow.

References

No response

jamiekt added the bug and new (new issue not yet triaged) labels Nov 28, 2024
jamiekt (Author) commented Nov 28, 2024

This information from the trace log may be of interest:

2024-11-28T15:33:22.270Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-11-28T15:33:22.280Z [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/aws/5.78.0/linux_amd64/terraform-provider-aws_v5.78.0_x5 id=1275

2024-11-28T15:33:22.347Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-11-28T15:33:22.349Z [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/vault/4.5.0/linux_amd64/terraform-provider-vault_v4.5.0_x5 id=1288

2024-11-28T15:33:22.396Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-11-28T15:33:22.397Z [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/mongey/kafka/0.5.2/linux_amd64/terraform-provider-kafka_v0.5.2 id=1300

jamiekt changed the title from 'terraform 1.10 causes "Error: Unsupported argument"' to 'terraform 1.10 causes "Error: Unsupported argument" on module source that worked with terraform 1.9' Nov 28, 2024
radeksimko (Member) commented Nov 28, 2024

Hi @jamiekt
I was unfortunately unable to reproduce the error with 1.10.0.

Running terraform init and terraform validate with 1.10.0 returns the following:

Success! The configuration is valid.

Can you share the output from terraform version, just to double check you are using the most recent (final) 1.10.0 version, rather than a 1.10 pre-release?

Thanks.

radeksimko added the waiting-response (an issue/pull request is waiting for a response from the community) label and removed the new (new issue not yet triaged) label Nov 28, 2024
jamiekt (Author) commented Nov 28, 2024

Hi @radeksimko,
We're running this inside GitHub Actions and use the custom action https://github.com/hashicorp/setup-terraform to install the desired version of Terraform. We specify the version as "1.x", and if we run the job with debug logging enabled we can see from the debug output that it's using 1.10.0:

Run hashicorp/setup-terraform@v3
##[debug]Finding releases for Terraform version 1.x
##[debug]Getting build for Terraform version 1.10.0: linux amd64
##[debug]Downloading Terraform CLI from https://releases.hashicorp.com/terraform/1.10.0/terraform_1.10.0_linux_amd64.zip
##[debug]Downloading https://releases.hashicorp.com/terraform/1.10.0/terraform_1.10.0_linux_amd64.zip
##[debug]Destination /runner/_work/_temp/31abb0db-6af6-47f6-9cd3-12bf46316a82
##[debug]download complete


jamiekt (Author) commented Nov 28, 2024

I noticed these lines in the TF_LOG trace output, which may or may not be significant:

2024-11-28T15:33:22.270Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-11-28T15:33:22.280Z [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/aws/5.78.0/linux_amd64/terraform-provider-aws_v5.78.0_x5 id=1275

2024-11-28T15:33:22.347Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-11-28T15:33:22.349Z [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/vault/4.5.0/linux_amd64/terraform-provider-vault_v4.5.0_x5 id=1288

2024-11-28T15:33:22.396Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-11-28T15:33:22.397Z [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/mongey/kafka/0.5.2/linux_amd64/terraform-provider-kafka_v0.5.2 id=1300

jbardin (Member) commented Nov 30, 2024

Hi @jamiekt,

Though it worked in some releases, the source argument for your module is not doing what you think it is. It seems it was formatted via trial and error, and you accidentally ended up with a string that was only ever supported incorrectly. Your source URL has a query parameter of ?depth=1//terraform-modules/private-topic, but some versions of Terraform were incorrectly splitting off the //... portion as a subdirectory, breaking some previously supported query patterns. Terraform 1.10 parses the source correctly, so the //... stays in the query string, no subdirectory is applied, and the module resolves to the repository root, which declares none of these variables; hence the "Unsupported argument" errors.

The subdirectory section of the module sources documentation shows the correct format: the subdirectory is part of the path, which prevents it from being interpreted as part of the query string. You should be able to use the following source to fetch your module correctly:

"github.com/our-org/kafka-automation//terraform-modules/private-topic?depth=1"

jbardin closed this as completed Nov 30, 2024
jamiekt (Author) commented Dec 2, 2024

Thank you @jbardin, you're spot on of course. I've found 30 instances of this faulty code across our org, so there have been quite a few failures.

crw added the explained (a Terraform Core team member has described the root cause of this issue in code) label and removed the waiting-response (an issue/pull request is waiting for a response from the community) label Dec 2, 2024
github-actions bot commented Jan 2, 2025

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Jan 2, 2025