If you arrived here, I presume you already know Nostr. If you haven't heard of it, Nostr stands for "Notes and Other Stuff Transmitted by Relays" and is an open protocol for censorship-resistant global networks created by @fiatjaf. Like HTTP or TCP/IP, Nostr is a protocol, an open standard upon which anyone can build. Each Nostr account is based on a public/private key pair. A simple way to think about this is that your public key is your username and your private key is your password. After you have created your key pair and registered with a client, you might want your username (the public key) checked and verified, showing the Nostr community that you're a real user, just like on Twitter. For example, check out mine on nostr.directory:
The verification process on Nostr is documented in a Nostr Implementation Possibility (NIP) called NIP-05. NIP-05 enables a Nostr user to map their public key to a DNS-based internet identifier. Read more details here.
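Concretely, a NIP-05 identifier looks like `name@domain`: a client verifies it by fetching `https://<domain>/.well-known/nostr.json?name=<name>` and checking that the returned JSON maps the name to the expected public key. The response has this shape (this example uses the hex pubkey from the NIP-05 specification):

```json
{
  "names": {
    "bob": "b0635d6a9851d3aed0cd6c495b282167acf761729078d975fc341b22650b07b9"
  }
}
```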
In this article, I will show you my experiment: using AWS CloudFront, API Gateway, Lambda and DynamoDB to build a NIP-05 identity service, then using Terraform to manage and deploy the whole service stack. If you would like to use other free or paid services, see here or here. By the way, I didn't get my own public key verified this way; I used a static nostr.json file served by my blog website. Read my hugo blogs series to find out how. The diagram below shows the high-level flow of the AWS services:
HashiCorp Terraform is an infrastructure-as-code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.
First, if you don't have an AWS account, you can follow these instructions to set up an AWS account and create an administrator user. There are 2 options to create the user: IAM or IAM Identity Center. I chose IAM Identity Center. Note, this is different from the root user created during your AWS account sign-up. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to an administrative user, and use the root user only for tasks that require root user access.
Then, follow this guide to install the AWS CLI. The AWS Command Line Interface (AWS CLI) is an open source tool that enables you to interact with AWS services using commands in your command-line shell.
Next, either install Terraform directly following this guide or use tfenv to install it. My Terraform version is v1.3.6.
For this experiment, I simply use the Terraform local backend. A backend defines where Terraform stores its state data files. By default, Terraform uses a backend called local, which stores state as a local file on disk. You can also set up an AWS S3 backend if you prefer.
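If you do go the S3 route, the backend stanza can be sketched like this (the bucket name, key, and region below are placeholders, not values from this project):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder - use your own bucket
    key    = "nostr-nip05/terraform.tfstate"
    region = "us-east-1"                   # placeholder - use your own region
  }
}
```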
In lines 9 and 16, I exported 2 functions: one to be used as the API Gateway authorizer and one as the main handler that looks up public keys in DynamoDB.
In line 10, the authorizer checks whether a special header x-origin-verify exists and equals the configured secret token, so that only requests coming through AWS CloudFront are permitted while requests hitting the API Gateway directly are blocked. Read this AWS article for the idea behind it.
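The authorizer logic can be sketched as follows. This is not the article's actual code; the names are assumptions. It uses the "simple response" format that API Gateway HTTP APIs accept from Lambda authorizers, where the function returns `{ isAuthorized: boolean }`:

```typescript
// Sketch of an API Gateway HTTP API Lambda authorizer (simple response format).
// CloudFront injects a shared secret as the x-origin-verify header; requests
// without the correct token (e.g. sent straight to API Gateway) are rejected.

interface AuthorizerEvent {
  headers?: Record<string, string | undefined>;
}

interface SimpleAuthorizerResult {
  isAuthorized: boolean;
}

// In the deployed function the expected token would come from configuration
// (e.g. an environment variable set by Terraform); it is injected here so the
// sketch is self-contained.
export function makeAuthorizer(secretToken: string) {
  return async (event: AuthorizerEvent): Promise<SimpleAuthorizerResult> => {
    // API Gateway lower-cases HTTP header names.
    const token = event.headers?.["x-origin-verify"];
    return { isAuthorized: token === secretToken };
  };
}
```
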
📃 dynamodb.ts:
Line 15 defines the DynamoDB table name, which can be found in the Terraform configuration file.
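The response-shaping part of the lookup handler can be sketched like this. This is a hypothetical sketch, not the article's dynamodb.ts: the real code would query DynamoDB via the AWS SDK, so here the lookup is injected as a function to keep the sketch self-contained:

```typescript
// Sketch of building the NIP-05 response body from a pubkey lookup.
// An unknown (or missing) name yields {"names": {}}; a known name yields
// {"names": {"<name>": "<pubkey>"}}, as NIP-05 expects.

type Lookup = (name: string) => Promise<string | undefined>;

export async function nip05Response(
  name: string | undefined,
  lookup: Lookup
): Promise<{ names: Record<string, string> }> {
  // No ?name= query parameter: nothing to answer.
  if (!name) return { names: {} };
  const pubkey = await lookup(name);
  // Unknown names also come back as an empty map rather than an error.
  return pubkey ? { names: { [name]: pubkey } } : { names: {} };
}
```
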
For the Terraform configuration files, let's look at them one by one.
📃 main.tf:
In line 19, I'm using my pre-defined AWS profile; please replace it with your own.
In line 33, the DynamoDB table is defined; the name should be the same as the one used in the dynamodb.ts file.
Lines 91-104 execute an external command that runs npm to compile and copy all the necessary files and generate a zip file. My package.json scripts look like below:
```json
"scripts": {
  "clean": "rimraf dist && rimraf package",
  "mkdirs": "mkdir dist && mkdir package",
  "copy:js": "cp dist/*.js* package/",
  "copy:node-modules": "cp -r node_modules package/",
  "copy": "npm run copy:js && npm run copy:node-modules",
  "compile": "tsc",
  "reinstall": "rimraf node_modules && npm install",
  "build": "npm run compile && npm run copy",
  "rebuild": "npm run reinstall && npm run build",
  "prezip": "npm run clean && npm run mkdirs && npm run rebuild"
}
```
All the final js files will be copied to the package folder before zipping.
📃 apigateway.tf
In line 31, the authorizer is configured to check whether the $request.header.x-origin-verify header exists; if not, API Gateway returns a 401 Unauthorized response without calling the Lambda function.
In line 37, we specify the API route key. Comment out this line and uncomment line 36 if you want to support any route.
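For orientation, that route resource might look like the following sketch (resource names here are assumptions, not the article's actual apigateway.tf):

```hcl
# Sketch only - resource names are assumptions.
resource "aws_apigatewayv2_route" "nostr_json" {
  api_id = aws_apigatewayv2_api.api.id

  # route_key = "ANY /{proxy+}"                # match any route
  route_key = "GET /.well-known/nostr.json"    # match only the NIP-05 path

  target             = "integrations/${aws_apigatewayv2_integration.app.id}"
  authorizer_id      = aws_apigatewayv2_authorizer.auth.id
  authorization_type = "CUSTOM"
}
```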
📃 cloudfront.tf:
```hcl
# Cloudfront
resource "aws_cloudfront_distribution" "api-cf" {
  origin {
    domain_name = replace(aws_apigatewayv2_stage.api.invoke_url, "/^https?://([^/]*).*/", "$1")
    origin_id   = "apigw"
    origin_path = "/${random_id.random_path.hex}"

    custom_header {
      name  = "x-origin-verify"
      value = random_string.random_token.result
    }

    custom_origin_config {
      https_port             = 443
      http_port              = 80
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  wait_for_deployment = false

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "apigw"

    forwarded_values {
      query_string = true
      cookies {
        forward = "all"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 0
    min_ttl                = 0
    max_ttl                = 0

    function_association {
      event_type   = "viewer-response"
      function_arn = aws_cloudfront_function.viewer_response.arn
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

resource "aws_cloudfront_function" "viewer_response" {
  name    = "nostr-nip05-viewer-response"
  runtime = "cloudfront-js-1.0"
  publish = true
  code    = <<EOT
function handler(event) {
  var response = event.response;
  var headers = response.headers;

  // If Access-Control-Allow-Origin CORS header is missing, add it.
  // Since JavaScript doesn't allow for hyphens in variable names, we use the dict["key"] notation.
  if (!headers['access-control-allow-origin']) {
    headers['access-control-allow-origin'] = { value: "*" };
    console.log("Access-Control-Allow-Origin was missing, adding it now.");
  }

  return response;
}
EOT
}
```
In lines 9-12, we use a generated random token as the secret for the x-origin-verify header.
In lines 40-43, we use a CloudFront function to add the access-control-allow-origin header to the response. See this for the explanation.
📃 outputs.tf:
```hcl
# Output value definitions
output "lambda_name" {
  description = "Name of the Lambda function."
  value       = aws_lambda_function.app_lambda.function_name
}

output "authorizer_name" {
  description = "Name of the Authorizer function."
  value       = aws_lambda_function.auth_lambda.function_name
}

output "gateway_url" {
  description = "Base URL for API Gateway stage."
  value       = aws_apigatewayv2_stage.api.invoke_url
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.api-cf.domain_name
}
```
Finally, go to the terraform folder and execute the terraform init command. If the init command completes successfully, you can execute terraform plan to check your settings and terraform apply to build and deploy your service stack onto AWS. The outputs will look like:
From the outputs, copy the CloudFront domain, then run curl https://<your-cloudfront-domain>/.well-known/nostr.json?name=<your username>. Deploying CloudFront takes some time; if you get an error Could not resolve host: <your-cloudfront-domain>, go to the AWS console and check your CloudFront distributions to see whether the newly deployed distribution's status is Enabled. Once it is enabled, try the curl command again and you should get {"names":{}} back.
Manually add an item to the DynamoDB table using the AWS console:
Use the JSON view format to add a test item:
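As a sketch, the test item in the DynamoDB JSON view might look like this (the attribute names `name` and `pubkey` are assumptions; they must match whatever the dynamodb.ts handler queries):

```json
{
  "name": { "S": "<your username>" },
  "pubkey": { "S": "<your pubkey>" }
}
```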
Then you should see {"names":{"<your username>":"<your pubkey>"}} as the curl command response.
That's it 🏁 The 2 things left to be done are adding an AWS Route 53 domain associated with the CloudFront distribution, and a Lambda function to add a user's pubkey to the DynamoDB table. I will leave these for the readers to figure out… 😏