Describe the bug
Terraform fatally errors, saying the plan is inconsistent.
To Reproduce
Steps to reproduce the behavior:
Create a container
Create a terraform_data resource whose triggers_replace references the container
See error
Please also provide a minimal Terraform configuration that reproduces the issue.
resource "proxmox_virtual_environment_container" "proxmox_ct" {
  description  = var.container.description
  started      = true
  node_name    = local.node
  vm_id        = local.vm_id
  unprivileged = var.container.unprivileged

  initialization {
    hostname = var.container.hostname

    ip_config {
      ipv4 {
        address = var.container.ipv4_address
        gateway = var.container.ipv4_gateway != null ? var.container.ipv4_gateway : null
      }
    }

    dns {
      servers = var.container.dns
    }

    user_account {
      keys = [
        local.public_key
      ]
    }
  }

  cpu {
    cores = var.container.cores
  }

  disk {
    datastore_id = local.datastore_id
    size         = var.container.disk
  }

  memory {
    dedicated = var.container.memory
    swap      = var.container.swap
  }

  network_interface {
    name = "veth0"
  }

  operating_system {
    template_file_id = var.container.os_image
    # Or you can use a volume ID, as obtained from a "pvesm list <storage>"
    # template_file_id = "local:vztmpl/jammy-server-cloudimg-amd64.tar.gz"
    type = var.container.os_type
  }

  start_on_boot = var.container.startup

  mount_point {
    volume = "/mnt/bindmounts/terraform"
    path   = "/terraform"
    shared = "true"
  }
}

resource "terraform_data" "provision" {
  triggers_replace = [proxmox_virtual_environment_container.proxmox_ct]

  connection {
    host     = var.pve_settings.pve_address
    type     = "ssh"
    user     = local.pve_user
    password = var.pve_settings.pve_password
  }

  provisioner "file" {
    source      = "${path.module}/enable_ssh.sh"
    destination = "/mnt/bindmounts/terraform/enable_ssh.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "pct exec ${local.vm_id} bash /terraform/enable_ssh.sh" # vmid goes in space
    ]
  }
}
and the output of terraform|tofu apply.
Expected behavior
The plan completes, and whenever a container is changed, destroyed, or created, the terraform_data resource is triggered.
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.
I used to use depends_on instead of triggers_replace, but during tofu apply I was running into issues where the provisioning was being skipped, so I swapped it to triggers_replace, and that produced this error.
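Roughly, the switch looked like this (a minimal sketch with the connection and provisioner blocks omitted; resource names match the configuration above):

```hcl
# Before: depends_on only orders operations, so the provisioning
# resource was sometimes skipped when the container changed.
resource "terraform_data" "provision" {
  depends_on = [proxmox_virtual_environment_container.proxmox_ct]
}

# After: triggers_replace forces this resource to be replaced whenever
# the referenced container changes -- this is the variant that now
# produces the "inconsistent plan" error.
resource "terraform_data" "provision" {
  triggers_replace = [proxmox_virtual_environment_container.proxmox_ct]
}
```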
Single or clustered Proxmox: 2-node cluster (I know)
Proxmox version: 8.2.2
Provider version (ideally it should be the latest version): 0.69
Terraform/OpenTofu version: OpenTofu v1.8.8
on linux_amd64
Debug log (TF_LOG=DEBUG terraform apply): out.txt
PS: can you add a warning to the bug report template that TF_LOG=DEBUG does show your PVE password, and remind people to Ctrl+F for it and remove it?
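As a stopgap until the template is updated, the password can be scrubbed from the captured log before attaching it (a sketch; `MYSECRET` is a placeholder for the real PVE password, and the sample file stands in for the actual debug output):

```shell
# Sample log file; in practice out.txt is the captured TF_LOG=DEBUG output.
printf 'connection: password=MYSECRET host=10.0.0.1\n' > out.txt

# Replace every occurrence of the password with a redaction marker in place
# (GNU sed -i, as on the linux_amd64 host this issue was reported from).
sed -i 's/MYSECRET/<REDACTED>/g' out.txt
```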