Introduction
Following our strategy discussion on defining a Resource Description Framework (RDF), this post walks through a hands-on example showing how a Python-driven RDF engine can generate Terraform and Bicep configuration files for a simple Azure Storage Account.
The goal is to illustrate the conceptual workflow, not to deploy production-ready infrastructure.
Step 1: Define Module Metadata in JSON-LD
We store metadata about our modules in JSON-LD, which captures intent and configuration details in a machine-readable way:
{
  "@context": {
    "az": "https://example.com/azure#"
  },
  "@id": "urn:example:storage-account:uksouth:demo",
  "@type": "az:StorageAccount",
  "az:name": "strdfstorage",
  "az:sku": {
    "az:tier": "Standard"
  },
  "az:accessTier": "Cool",
  "az:accountReplication": "LRS"
}
This metadata could later scale to hundreds of modules while maintaining a standardised structure and naming convention.
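To show how such a definition might be consumed, here is a minimal sketch that loads it into an RDF graph; it assumes rdflib 6+ (which ships a built-in JSON-LD parser) and an illustrative file name, modules/storage_account.jsonld:

from rdflib import Graph, URIRef

AZ = "https://example.com/azure#"

# Load the JSON-LD definition into an RDF graph.
g = Graph()
g.parse("modules/storage_account.jsonld", format="json-ld")

subject = URIRef("urn:example:storage-account:uksouth:demo")

# Read individual properties back out of the graph.
name = g.value(subject, URIRef(AZ + "name"))
access_tier = g.value(subject, URIRef(AZ + "accessTier"))
print(name, access_tier)  # strdfstorage Cool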
Step 2: Generate IaC with Python
Here’s an extract from a minimal Python script that reads the RDF metadata and creates ‘storage_account.auto.tfvars’ and ‘main.bicepparam’ files:
def extract_storage_account(defn: dict) -> dict:
    """Normalise a storage account definition (namespace prefixes
    already stripped) into a flat dict ready for IaC generation."""
    sa = {}
    sku = defn.get('sku') or {}
    sa['sku_tier'] = sku.get('tier')
    sa['access_tier'] = defn.get('accessTier')
    sa['accountReplication'] = defn.get('accountReplication')
    sa['blobSoftDeleteRetentionDays'] = defn.get('blobSoftDeleteRetentionDays')
    # Require at least sku_tier and accountReplication
    missing = [k for k in ('sku_tier', 'accountReplication') if not sa.get(k)]
    if missing:
        raise ValueError(f"Missing required properties in definition: {missing}")
    return sa
In this extract, we normalise the storage account definition from RDF into a simple Python dict, ready to be turned into Infrastructure as Code.
In practice, the full RDF engine handles multiple resources, writes Terraform ‘.auto.tfvars’ and Bicep ‘.bicepparam’ files, and enforces stricter schema validation.
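As a minimal sketch of what that generation step could look like (the render_tfvars helper and the terraform/ output path are illustrative, not the engine's actual API), the normalised dict can be rendered into tfvars lines:

import json

def render_tfvars(sa: dict) -> str:
    """Render the normalised storage account dict as Terraform tfvars lines."""
    lines = []
    for key, value in sa.items():
        if value is None:
            continue  # omit optional properties that were not set
        # json.dumps gives correct quoting for strings, numbers and booleans.
        lines.append(f"{key} = {json.dumps(value)}")
    return "\n".join(lines) + "\n"

sa = extract_storage_account({
    'sku': {'tier': 'Standard'},
    'accessTier': 'Cool',
    'accountReplication': 'LRS',
})

with open('terraform/storage_account.auto.tfvars', 'w') as f:
    f.write(render_tfvars(sa))

# Produces, for example:
# sku_tier = "Standard"
# access_tier = "Cool"
# accountReplication = "LRS"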
You can find an example of this foundational implementation on GitHub.
This example produces ready-to-use Terraform ‘.auto.tfvars’ and Bicep ‘.bicepparam’ files from the centralised metadata, ensuring consistency and discoverability.
Step 3a: Deploy the Terraform Configuration
# Initialise Terraform.
terraform init
# Perform a plan with the RDF-injected variables.
terraform plan -var="subscription_id=<your Azure subscription ID>"
# Deploy the infrastructure. Optionally add `-auto-approve` to skip reviewing the plan.
terraform apply -var="subscription_id=<your Azure subscription ID>"
Step 3b: Deploy the Bicep Configuration
# Deploy the Bicep template at subscription scope with the generated parameters.
az deployment sub create --location="uksouth" --template-file=bicep/main.bicep --parameters=bicep/main.bicepparam
With two Azure Storage Accounts now deployed through metadata-driven, standardised IaC, you’ve seen how the RDF engine creates consistency across resources. This foundation sets the stage for scaling the same approach to more complex workloads later in the series.
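As an optional sanity check, the deployed account’s properties can be read back through the Azure SDK. This is a sketch assuming the azure-identity and azure-mgmt-storage packages; the resource group name rg-rdf-demo is hypothetical, while the account name comes from the metadata above:

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your Azure subscription ID>"

# Authenticates via environment variables, managed identity, or `az login`.
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Resource group is illustrative; use the values from your own deployment.
account = client.storage_accounts.get_properties("rg-rdf-demo", "strdfstorage")
print(account.sku.name, account.access_tier)  # e.g. Standard_LRS Cool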
Conclusion
This example shows how the RDF engine bridges vision and implementation: by defining modules centrally, you can generate Terraform and Bicep code reliably, consistently, and at scale.
In the next article of this series, we’ll look at using SPARQL to query the RDF engine, uncovering the relationships between resources and their dependencies.
You can explore the full implementation, including extended examples and ready-to-use code, on the Quadrivium Cloud GitHub.