<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Sebastian Hesse</title><description>AWS Serverless Cloud Consultant | Consulting &amp; Training</description><link>https://www.sebastianhesse.de/</link><item><title>Run Custom Build Commands During CDK Synthesis with Code.fromCustomCommand</title><link>https://www.sebastianhesse.de/2026/03/01/cdk-code-from-custom-command/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2026/03/01/cdk-code-from-custom-command/</guid><description>Learn how to use CDK&apos;s Code.fromCustomCommand to run custom build scripts, download artifacts, or use non-standard toolchains like Rust or Go during CDK synthesis.</description><pubDate>Sun, 01 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Have you ever needed to build a Rust or Go Lambda function directly inside your CDK stack? Or download a pre-built artifact from S3 during deployment? CDK&apos;s built-in constructs like &lt;code&gt;NodejsFunction&lt;/code&gt; or &lt;code&gt;Code.fromAsset&lt;/code&gt; don&apos;t always cover non-JavaScript runtimes or custom build pipelines. That&apos;s where &lt;code&gt;Code.fromCustomCommand&lt;/code&gt; comes in.&lt;/p&gt;
&lt;h2&gt;What Is &lt;code&gt;Code.fromCustomCommand&lt;/code&gt;?&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;Code.fromCustomCommand&lt;/code&gt; is a flexible escape hatch for Lambda packaging. It lets you run &lt;strong&gt;any shell command&lt;/strong&gt; during CDK synthesis to produce a Lambda deployment artifact:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;lambda.Code.fromCustomCommand(
  path.join(__dirname, &apos;..&apos;, &apos;dist&apos;, &apos;lambda&apos;),
  [&apos;bash&apos;, &apos;scripts/build.sh&apos;, &apos;dist/lambda&apos;],
  { commandOptions: { stdio: &apos;inherit&apos; } }
)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The signature is &lt;code&gt;Code.fromCustomCommand(outputDir, command, options)&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;outputDir&lt;/code&gt;&lt;/strong&gt; — the directory CDK zips and stages to S3 as your Lambda code&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;command&lt;/code&gt;&lt;/strong&gt; — the command to execute before packaging, given as a string array whose first element is the executable and the rest are its arguments&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;options&lt;/code&gt;&lt;/strong&gt; — its &lt;code&gt;commandOptions&lt;/code&gt; property is passed to Node&apos;s &lt;code&gt;spawnSync&lt;/code&gt;, so you can control &lt;code&gt;cwd&lt;/code&gt;, &lt;code&gt;env&lt;/code&gt;, &lt;code&gt;shell&lt;/code&gt;, and &lt;code&gt;stdio&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Under the hood, CDK runs the command with Node&apos;s &lt;code&gt;spawnSync&lt;/code&gt;, then zips &lt;code&gt;outputDir&lt;/code&gt; and stages it as a Lambda asset for upload.&lt;/p&gt;
&lt;h2&gt;When Does It Run?&lt;/h2&gt;
&lt;p&gt;The command executes during CDK synthesis — every time you run &lt;code&gt;cdk synth&lt;/code&gt;, &lt;code&gt;cdk diff&lt;/code&gt;, or &lt;code&gt;cdk deploy&lt;/code&gt;. It also runs during &lt;code&gt;npm test&lt;/code&gt; when your tests synthesize stacks via &lt;code&gt;Template.fromStack()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It does &lt;strong&gt;not&lt;/strong&gt; run during &lt;code&gt;npm run build&lt;/code&gt; (TypeScript compilation only).&lt;/p&gt;
&lt;h2&gt;Primary Use Cases&lt;/h2&gt;
&lt;h3&gt;Custom Build Toolchains&lt;/h3&gt;
&lt;p&gt;If you&apos;re writing Lambda functions in Rust, Go, or any language with a native build step, you can invoke the toolchain directly:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Build a Rust Lambda function
lambda.Code.fromCustomCommand(
  path.join(__dirname, &apos;..&apos;, &apos;target&apos;, &apos;lambda&apos;, &apos;my-function&apos;),
  [&apos;cargo&apos;, &apos;lambda&apos;, &apos;build&apos;, &apos;--release&apos;],
  { commandOptions: { cwd: path.join(__dirname, &apos;..&apos;), stdio: &apos;inherit&apos; } }
)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This keeps your build process inside CDK without needing separate CI steps or pre-built artifacts checked into source control.&lt;/p&gt;
&lt;h3&gt;External Artifact Retrieval&lt;/h3&gt;
&lt;p&gt;Need to download a pre-built binary from S3 or a package registry? Run the download during synthesis:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;lambda.Code.fromCustomCommand(
  path.join(__dirname, &apos;dist&apos;),
  [&apos;aws&apos;, &apos;s3&apos;, &apos;cp&apos;, &apos;s3://my-artifacts/my-function.zip&apos;, &apos;dist/&apos;],
  { commandOptions: { stdio: &apos;inherit&apos; } }
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Synth-Time Side Effects&lt;/h3&gt;
&lt;p&gt;You can even point &lt;code&gt;outputDir&lt;/code&gt; at a stub directory and use the command purely for side effects — generating config files, validating external dependencies, or populating local caches.&lt;/p&gt;
&lt;h2&gt;Critical Behaviors to Know&lt;/h2&gt;
&lt;p&gt;Before reaching for &lt;code&gt;Code.fromCustomCommand&lt;/code&gt;, keep these in mind:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;No built-in caching.&lt;/strong&gt; The command runs on every single synthesis. A slow build or network download will slow down every &lt;code&gt;cdk diff&lt;/code&gt; and &lt;code&gt;cdk deploy&lt;/code&gt;. If performance matters, implement your own up-to-date checks in the script.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fatal on failure.&lt;/strong&gt; A non-zero exit code immediately aborts synthesis. Make sure your script exits cleanly on success and fails loudly on real errors.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Blocking.&lt;/strong&gt; Synthesis pauses completely while your command runs. There&apos;s no parallelism here — everything waits.&lt;/p&gt;
&lt;h2&gt;A Practical Pattern&lt;/h2&gt;
&lt;p&gt;Since the command runs on every synthesis, a common pattern is to guard the expensive work with an up-to-date check in the build script:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash
# scripts/build.sh
set -e

OUTPUT_DIR=&quot;$1&quot;

# Skip rebuild if the output is newer than the source directory.
# Note: `-nt` compares directory mtimes only, so edits to existing files
# inside src/ won&apos;t bump its mtime; use a recursive check (e.g. via
# `find src -newer ...`) if you need that.
if [ -d &quot;$OUTPUT_DIR&quot; ] &amp;amp;&amp;amp; [ &quot;$OUTPUT_DIR&quot; -nt &quot;src/&quot; ]; then
  echo &quot;Output up to date, skipping build&quot;
  exit 0
fi

cargo lambda build --release --lambda-dir &quot;$OUTPUT_DIR&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then in your CDK stack:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const code = lambda.Code.fromCustomCommand(
  path.join(__dirname, &apos;..&apos;, &apos;dist&apos;, &apos;my-function&apos;),
  [&apos;bash&apos;, &apos;scripts/build.sh&apos;, &apos;dist/my-function&apos;],
  { commandOptions: { stdio: &apos;inherit&apos; } }
);

new lambda.Function(this, &apos;MyFunction&apos;, {
  runtime: lambda.Runtime.PROVIDED_AL2023,
  handler: &apos;bootstrap&apos;,
  code,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can find a complete working example in the &lt;a href=&quot;https://github.com/sh-cloud-software/cdk-code-custom-command-example&quot;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;When Not to Use It&lt;/h2&gt;
&lt;p&gt;If you&apos;re writing Node.js Lambda functions, stick with &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda_nodejs.NodejsFunction.html&quot;&gt;&lt;code&gt;NodejsFunction&lt;/code&gt;&lt;/a&gt; — it handles bundling, tree-shaking, and esbuild configuration automatically. For Python, &lt;code&gt;PythonFunction&lt;/code&gt; from &lt;code&gt;@aws-cdk/aws-lambda-python-alpha&lt;/code&gt; covers most scenarios.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Code.fromCustomCommand&lt;/code&gt; shines when you have a toolchain or workflow that CDK&apos;s built-in constructs simply can&apos;t accommodate.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;Code.fromCustomCommand&lt;/code&gt; fills an important gap in CDK&apos;s Lambda packaging story. If you need to invoke custom toolchains, pull artifacts from external sources, or run synth-time scripts, it gives you a clean, native integration point without stepping outside of CDK. Just build in your own caching logic to keep synthesis fast.&lt;/p&gt;
&lt;p&gt;For a broader look at your Lambda packaging options in CDK, check out my post on &lt;a href=&quot;/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/&quot;&gt;5 ways to bundle a Lambda function within a CDK construct&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Have you tried it with an unusual runtime or build pipeline? Reach out on &lt;a href=&quot;https://www.linkedin.com/in/sebastian-hesse&quot;&gt;LinkedIn&lt;/a&gt; — I&apos;d love to hear what you&apos;re building.&lt;/p&gt;
</content:encoded></item><item><title>Scale CloudWatch Alarms with Metrics Insights Queries</title><link>https://www.sebastianhesse.de/2026/02/24/scale-cloudwatch-alarms-with-metrics-insights/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2026/02/24/scale-cloudwatch-alarms-with-metrics-insights/</guid><description>Use CloudWatch Metrics Insights to monitor multiple resources by querying their tags.</description><pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Have you ever hit CloudFormation&apos;s 500-resource limit while trying to properly monitor your Lambda functions? If you&apos;re managing a large serverless application with comprehensive monitoring, this constraint can sneak up on you fast. Let me show you an elegant solution using CloudWatch Metrics Insights that reduces hundreds of alarm resources down to just a few.&lt;/p&gt;
&lt;h2&gt;The Resource Explosion Problem&lt;/h2&gt;
&lt;p&gt;Traditional Lambda monitoring is straightforward but resource-hungry. For each function, you typically create separate alarms for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Error rate monitoring&lt;/li&gt;
&lt;li&gt;Throttling detection&lt;/li&gt;
&lt;li&gt;Duration warnings&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For 100 Lambda functions with 3 alarms each, you&apos;ve already consumed 300 CloudFormation resources just for monitoring! Add in the actual Lambda Functions, IAM roles, policies, API Gateway resources, and other infrastructure components, and you&apos;ll quickly hit that 500-resource ceiling.&lt;/p&gt;
&lt;p&gt;Sure, you could split your CloudFormation stack into multiple nested stacks or create a separate Lambda Function to automatically manage alarms for all your functions.
But that adds complexity and makes your infrastructure harder to manage. What if there was a better way?&lt;/p&gt;
&lt;h2&gt;CloudWatch Metrics Insights: SQL for Your Metrics&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://aws.amazon.com/about-aws/whats-new/2025/09/amazon-cloudwatch-alarm-multiple-metrics/&quot;&gt;CloudWatch Metrics Insights&lt;/a&gt; provides a SQL-like query language that lets you aggregate and analyze metrics across multiple resources. The game-changer? You can create a &lt;strong&gt;single alarm&lt;/strong&gt; that monitors &lt;strong&gt;all&lt;/strong&gt; your Lambda functions at once.&lt;/p&gt;
&lt;p&gt;Here&apos;s how it works: instead of creating individual alarms per function, you write a Metrics Insights query that groups your Lambda functions by tags and monitors them collectively. When any function breaches your threshold, CloudWatch identifies which specific function triggered the alarm through contributor attributes.&lt;/p&gt;
&lt;h3&gt;Tag-Based Filtering&lt;/h3&gt;
&lt;p&gt;The key to this approach is resource tagging. You tag your Lambda functions based on their monitoring requirements:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Tag functions that need high-priority error monitoring
cdk.Tags.of(sampleFunction).add(&apos;errorMetric&apos;, &apos;high&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then your Metrics Insights query targets only the tagged functions:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SELECT SUM(Errors)
FROM &quot;AWS/Lambda&quot;
WHERE tag.&quot;errorMetric&quot; = &apos;high&apos;
GROUP BY tag.&quot;aws:cloudformation:logical-id&quot;
ORDER BY SUM() DESC
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This query:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Sums all errors from Lambda functions tagged with &lt;code&gt;errorMetric=high&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Groups results by CloudFormation logical ID (identifies which function)&lt;/li&gt;
&lt;li&gt;Orders by error count (worst offenders first)&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Architecture Overview&lt;/h2&gt;
&lt;p&gt;The architecture is refreshingly simple compared to traditional per-function monitoring:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/cloudwatch-metrics-insights-architecture.svg&quot; alt=&quot;CloudWatch Metrics Insights Architecture&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Instead of 300 individual alarms (100 functions × 3 alarm types), you maintain just 3 alarms:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One for high-priority errors&lt;/li&gt;
&lt;li&gt;One for throttling&lt;/li&gt;
&lt;li&gt;One for duration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each alarm uses a Metrics Insights query to monitor all relevant functions simultaneously. When an alarm triggers, CloudWatch provides contributor insights showing exactly which function caused the breach.&lt;/p&gt;
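As a sketch of what such an alarm can look like in raw CloudFormation (logical ID, threshold, and period are illustrative; verify the exact properties supported for multi-time-series query alarms in the current docs):

```yaml
# Illustrative sketch only: one alarm covering every function tagged errorMetric=high.
HighPriorityErrorAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Errors across all Lambda functions tagged errorMetric=high
    Metrics:
      - Id: errors
        Expression: >-
          SELECT SUM(Errors) FROM "AWS/Lambda"
          WHERE tag."errorMetric" = 'high'
          GROUP BY tag."aws:cloudformation:logical-id"
          ORDER BY SUM() DESC
        Period: 300
        ReturnData: true
    ComparisonOperator: GreaterThanThreshold
    Threshold: 0
    EvaluationPeriods: 1
    TreatMissingData: notBreaching
```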
&lt;h2&gt;Beyond Lambda: Universal Pattern&lt;/h2&gt;
&lt;p&gt;While I&apos;ve focused on Lambda functions here, this pattern works for &lt;strong&gt;any&lt;/strong&gt; AWS service that publishes metrics to CloudWatch. You could:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Monitor error rates across multiple API Gateway REST APIs&lt;/li&gt;
&lt;li&gt;Track DynamoDB throttling across all tables in a specific environment&lt;/li&gt;
&lt;li&gt;Aggregate ECS task failures by deployment group&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The pattern remains the same: tag your resources, write a Metrics Insights query filtering by those tags, and create a single alarm that monitors them all.&lt;/p&gt;
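For instance, the DynamoDB throttling case might look like this (the tag key `environment` is an assumption for illustration):

```sql
SELECT SUM(ThrottledRequests)
FROM "AWS/DynamoDB"
WHERE tag."environment" = 'production'
GROUP BY tag."aws:cloudformation:logical-id"
ORDER BY SUM() DESC
```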
&lt;h2&gt;Try it Yourself&lt;/h2&gt;
&lt;p&gt;Want to try it yourself? Check out the &lt;a href=&quot;https://github.com/sh-cloud-software/cloudwatch-metrics-insights-query-alarm-cdk-example&quot;&gt;complete working example on GitHub&lt;/a&gt; with deployment instructions and test cases.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;💡 Before you try it, ensure you have enabled &lt;a href=&quot;https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/EnableResourceTagsOnTelemetry.html&quot;&gt;resource tags on telemetry data&lt;/a&gt; in your AWS CloudWatch settings.
Also, it may take a few moments until the resource tags are available in CloudWatch. Your metric will not show any results until then.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;CloudWatch Metrics Insights transforms how you approach monitoring at scale. Instead of creating hundreds of individual alarms that consume your CloudFormation resource budget, you create a few powerful queries that dynamically monitor tagged resources.&lt;/p&gt;
&lt;p&gt;This approach offers several advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Resource efficiency&lt;/strong&gt;: Drastically reduces CloudFormation resource consumption&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure-as-code compliant&lt;/strong&gt;: No external automation functions needed&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flexible querying&lt;/strong&gt;: SQL-like syntax with aggregations and filtering&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Add new Lambda functions without touching alarm definitions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you&apos;re building large serverless applications or managing multiple Lambda functions, CloudWatch Metrics Insights should be in your monitoring toolkit. It&apos;s particularly valuable when combined with other monitoring best practices &lt;a href=&quot;/2018/10/07/remove-old-cloudwatch-log-groups-of-lambda-function/&quot;&gt;like cleaning up old CloudWatch log groups&lt;/a&gt; and &lt;a href=&quot;/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/&quot;&gt;optimizing your CDK constructs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Have you already tested it? Reach out to me on &lt;a href=&quot;https://www.linkedin.com/in/sebastian-hesse&quot;&gt;LinkedIn&lt;/a&gt; to share your experience!&lt;/p&gt;
</content:encoded></item><item><title>Serve Markdown for LLMs and AI Agents Using Amazon CloudFront</title><link>https://www.sebastianhesse.de/2026/02/14/serve-markdown-for-llms-using-cloudfront/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2026/02/14/serve-markdown-for-llms-using-cloudfront/</guid><description>Learn how to serve Markdown to LLM and AI agent clients while keeping HTML for human visitors, using CloudFront Functions, Lambda, and S3 — the AWS equivalent of Cloudflare&apos;s &apos;Markdown for Agents&apos; feature.</description><pubDate>Sat, 14 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;LLMs and AI agents are increasingly browsing the web to gather information, answer questions, and complete tasks. But these clients don&apos;t need fancy HTML layouts, stylesheets, or JavaScript. They work best with clean, structured Markdown. Cloudflare recently &lt;a href=&quot;https://blog.cloudflare.com/markdown-for-agents/&quot;&gt;introduced a feature called &quot;Markdown for Agents&quot;&lt;/a&gt; that automatically serves Markdown to AI clients. But what if you&apos;re running your infrastructure on AWS?&lt;/p&gt;
&lt;p&gt;In this post, I&apos;ll walk you through the concept of building the same capability with &lt;strong&gt;Amazon CloudFront&lt;/strong&gt;, &lt;strong&gt;S3&lt;/strong&gt;, and &lt;strong&gt;Lambda&lt;/strong&gt;, wired together with the &lt;strong&gt;AWS CDK&lt;/strong&gt;. The full, deployable example project is available on &lt;a href=&quot;https://github.com/sh-cloud-software/cloudfront-markdown-for-llms&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;The Concept: Content Negotiation&lt;/h2&gt;
&lt;p&gt;The core idea is &lt;strong&gt;content negotiation&lt;/strong&gt; via the HTTP &lt;code&gt;Accept&lt;/code&gt; header. When a client makes a request, it tells the server what content types it can handle. Browsers typically send &lt;code&gt;Accept: text/html&lt;/code&gt;, while many LLM clients now send &lt;code&gt;Accept: text/markdown&lt;/code&gt; (see Cloudflare&apos;s blog post for details).&lt;/p&gt;
&lt;p&gt;By inspecting this header at the CDN level, we can route each request to the right version of the content:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Human visitor&lt;/strong&gt; (browser) sends &lt;code&gt;Accept: text/html&lt;/code&gt; → receives the normal HTML page&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;LLM or AI agent&lt;/strong&gt; sends &lt;code&gt;Accept: text/markdown&lt;/code&gt; → receives a clean Markdown version&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is completely transparent to existing users. Browsers continue to get HTML as before. Only clients that explicitly request &lt;code&gt;text/markdown&lt;/code&gt; receive the Markdown version.&lt;/p&gt;
&lt;h2&gt;Architecture Overview&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/serve-markdown-for-llms-architecture.svg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The solution consists of two parts: &lt;strong&gt;request routing&lt;/strong&gt; and &lt;strong&gt;content generation&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Request routing&lt;/strong&gt; happens at the edge. A CloudFront Function inspects the &lt;code&gt;Accept&lt;/code&gt; header on every viewer request. If the client requests &lt;code&gt;text/markdown&lt;/code&gt;, the function rewrites the URI — for example, changing &lt;code&gt;/about.html&lt;/code&gt; to &lt;code&gt;/about.md&lt;/code&gt; or &lt;code&gt;/&lt;/code&gt; to &lt;code&gt;/index.md&lt;/code&gt;. Non-HTML file extensions like &lt;code&gt;.css&lt;/code&gt;, &lt;code&gt;.js&lt;/code&gt;, or &lt;code&gt;.png&lt;/code&gt; are left untouched. If the header doesn&apos;t contain &lt;code&gt;text/markdown&lt;/code&gt;, the request passes through unchanged.&lt;/p&gt;
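A minimal sketch of such a viewer-request function might look like the following (the actual function in the repository may differ in details; this assumes the CloudFront Functions JavaScript runtime, which exposes headers as lowercase keys with a `value` property):

```javascript
// Sketch of the viewer-request handler: rewrite HTML-ish URIs to their
// Markdown siblings when the client asks for text/markdown.
function handler(event) {
  var request = event.request;
  var accept = request.headers.accept ? request.headers.accept.value : '';

  if (accept.includes('text/markdown')) {
    if (request.uri.endsWith('/')) {
      request.uri = request.uri + 'index.md';               // '/'           -> '/index.md'
    } else if (request.uri.endsWith('.html')) {
      request.uri = request.uri.replace(/\.html$/, '.md');  // '/about.html' -> '/about.md'
    } else if (!request.uri.includes('.')) {
      request.uri = request.uri + '.md';                    // '/about'      -> '/about.md'
    }
    // Any other extension (.css, .js, .png, ...) passes through unchanged.
  }
  return request;
}
```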
&lt;p&gt;&lt;strong&gt;Content generation&lt;/strong&gt; happens asynchronously. Whenever an HTML file is uploaded to S3, an S3 event notification triggers a Lambda function. This function reads the HTML, converts it to Markdown using the &lt;a href=&quot;https://github.com/mixmark-io/turndown&quot;&gt;turndown&lt;/a&gt; library, and writes the resulting &lt;code&gt;.md&lt;/code&gt; file back to S3 at the same path but with a different extension. So &lt;code&gt;about.html&lt;/code&gt; automatically gets a sibling &lt;code&gt;about.md&lt;/code&gt;. The Lambda includes a safety check to skip files that are already &lt;code&gt;.md&lt;/code&gt;, preventing infinite trigger loops.&lt;/p&gt;
&lt;h2&gt;Why Pre-Generation?&lt;/h2&gt;
&lt;p&gt;You might wonder: why not convert HTML to Markdown on-the-fly using Lambda@Edge or CloudFront Functions?&lt;/p&gt;
&lt;p&gt;The answer is a CloudFront limitation: &lt;strong&gt;neither CloudFront Functions nor Lambda@Edge can access the origin&apos;s response body&lt;/strong&gt;. They can only set a new body or manipulate headers and the request URI.&lt;/p&gt;
&lt;p&gt;On-the-fly conversion is possible, but it requires loading the content yourself from S3 inside the edge function, which adds latency to every request.&lt;/p&gt;
&lt;p&gt;Pre-generation is therefore the better approach:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Zero conversion latency&lt;/strong&gt; at request time — the Markdown is already in S3, ready to serve&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simpler caching&lt;/strong&gt; — each file is a separate S3 object with its own cache entry&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No compute cost per request&lt;/strong&gt; — conversion happens once at upload time, not on every request&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Testing It&lt;/h2&gt;
&lt;p&gt;You can find all the implementation details — the CloudFront Function, the Lambda handler, and the CDK stack — in the &lt;a href=&quot;https://github.com/sh-cloud-software/cloudfront-markdown-for-llms&quot;&gt;GitHub repository&lt;/a&gt;. By the way, if you&apos;re interested in how you can bundle a Lambda function within a CDK construct, check out my post on &lt;a href=&quot;/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/&quot;&gt;5 ways to bundle a Lambda function within an AWS CDK construct&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After deploying the stack, you can verify the content negotiation with &lt;code&gt;curl&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# HTML response (default browser behavior)
curl https://your-distribution.cloudfront.net/

# Markdown response (what an LLM client would send)
curl -H &quot;Accept: text/markdown&quot; https://your-distribution.cloudfront.net/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first request returns the normal HTML page. The second returns clean Markdown.&lt;/p&gt;
&lt;h2&gt;When Is This Useful?&lt;/h2&gt;
&lt;p&gt;This pattern is valuable whenever you want AI agents to efficiently consume your web content: documentation sites, API reference pages, knowledge bases, or corporate websites. Instead of forcing LLMs to parse messy HTML and strip away navigation, ads, and scripts, you serve them exactly the structured content they need.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;With a CloudFront Function for request routing, a Lambda function for HTML-to-Markdown conversion, and a cache policy that includes the &lt;code&gt;Accept&lt;/code&gt; header, you can replicate Cloudflare&apos;s &quot;Markdown for Agents&quot; feature entirely on AWS. The pre-generation approach keeps things simple and adds zero latency at request time.&lt;/p&gt;
&lt;p&gt;The full CDK project is on &lt;a href=&quot;https://github.com/sh-cloud-software/cloudfront-markdown-for-llms&quot;&gt;GitHub&lt;/a&gt; — deploy it, try it out, and adapt it to your use case. If you&apos;re interested in more serverless patterns on AWS, check out my &lt;a href=&quot;/2019/07/21/going-serverless-why-and-how-1/&quot;&gt;introduction to serverless&lt;/a&gt; or &lt;a href=&quot;/2020/03/31/going-serverless-why-and-how-2/&quot;&gt;best practices for developing AWS Lambda functions&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Running Scripts Across Multiple AWS Accounts with AWS SSO</title><link>https://www.sebastianhesse.de/2026/02/11/run-script-across-multiple-aws-accounts/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2026/02/11/run-script-across-multiple-aws-accounts/</guid><description>Execute AWS CLI commands across multiple AWS accounts and regions from your local machine using AWS SSO.</description><pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Managing multiple AWS accounts is common in organizations following best practices. You might have separate accounts for development, staging, and production, or multiple sandbox accounts for different teams. But what happens when you need to run the same AWS CLI command across many of those accounts?&lt;/p&gt;
&lt;p&gt;In this post, I&apos;ll show you how to use a bash script with AWS SSO to execute commands across multiple AWS accounts and regions from your local machine.&lt;/p&gt;
&lt;h2&gt;🎯 The Challenge&lt;/h2&gt;
&lt;p&gt;When working with &lt;a href=&quot;/2018/02/03/creating-different-aws-cloudformation-environments/&quot;&gt;multiple AWS environments&lt;/a&gt;, you often need to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Check resource configurations across all accounts&lt;/li&gt;
&lt;li&gt;Deploy or update resources consistently&lt;/li&gt;
&lt;li&gt;Audit settings or gather information&lt;/li&gt;
&lt;li&gt;Clean up resources across environments&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Doing this manually means:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Running &lt;code&gt;aws sso login&lt;/code&gt; for each profile&lt;/li&gt;
&lt;li&gt;Switching between profiles using &lt;code&gt;--profile&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Repeating commands for each region using &lt;code&gt;--region&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Manually tracking success and failures&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This process is tedious, error-prone, and time-consuming.&lt;/p&gt;
&lt;h2&gt;🛠️ The Solution&lt;/h2&gt;
&lt;p&gt;Here&apos;s a bash script that automates running AWS CLI commands across multiple accounts and regions. It handles authentication, execution, and provides clear feedback on success or failure.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash

set -euo pipefail

# Configuration
PROFILES=(&quot;product-dev&quot; &quot;product-test&quot; &quot;product-prod&quot; &quot;sandbox-one&quot; &quot;sandbox-two&quot;)
REGIONS=(&quot;eu-central-1&quot; &quot;eu-west-1&quot;)

# Colors for output
RED=&apos;\033[0;31m&apos;
GREEN=&apos;\033[0;32m&apos;
YELLOW=&apos;\033[1;33m&apos;
NC=&apos;\033[0m&apos;

log_info() {
    echo -e &quot;${GREEN}[INFO]${NC} $1&quot;
}

log_error() {
    echo -e &quot;${RED}[ERROR]${NC} $1&quot;
}

log_warning() {
    echo -e &quot;${YELLOW}[WARNING]${NC} $1&quot;
}

check_credentials() {
    local profile=$1

    if aws sts get-caller-identity \
        --profile &quot;${profile}&quot; \
        --no-cli-pager &amp;amp;&amp;gt;/dev/null; then
        return 0
    else
        return 1
    fi
}

execute_aws_cli_command() {
    local profile=$1
    local region=$2

    log_info &quot;Processing profile: ${profile}, region: ${region}&quot;

    # Replace this with your actual AWS CLI command
    if aws sts get-caller-identity \
        --profile &quot;${profile}&quot; \
        --region &quot;${region}&quot; \
        --no-cli-pager 2&amp;gt;/dev/null; then
        log_info &quot;✓ Successfully ran AWS CLI command for ${profile} in ${region}&quot;
        return 0
    else
        log_error &quot;✗ Failed to run AWS CLI command for ${profile} in ${region}&quot;
        return 1
    fi
}

main() {
    local total=0
    local success=0
    local failed=0

    log_info &quot;Running AWS CLI command across multiple AWS accounts and regions...&quot;
    log_info &quot;Profiles: ${PROFILES[*]}&quot;
    log_info &quot;Regions: ${REGIONS[*]}&quot;
    echo &quot;&quot;

    for profile in &quot;${PROFILES[@]}&quot;; do
        if ! check_credentials &quot;${profile}&quot;; then
            log_warning &quot;No valid credentials for profile ${profile}, starting login procedure...&quot;
            aws sso login --profile &quot;${profile}&quot;
        fi

        for region in &quot;${REGIONS[@]}&quot;; do
            # Note: ((total++)) would return a non-zero exit status while
            # total is 0 and abort the script under `set -e`, so use
            # arithmetic assignments instead.
            total=$((total + 1))
            if execute_aws_cli_command &quot;${profile}&quot; &quot;${region}&quot;; then
                success=$((success + 1))
            else
                failed=$((failed + 1))
            fi
        done
    done

    echo &quot;&quot;
    log_info &quot;Summary: Total=${total}, Success=${success}, Failed=${failed}&quot;

    if [ ${failed} -gt 0 ]; then
        exit 1
    fi
}

main
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;🔧 How It Works&lt;/h2&gt;
&lt;h3&gt;1. Configuration&lt;/h3&gt;
&lt;p&gt;The script starts by defining your AWS SSO profiles and target regions:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PROFILES=(&quot;product-dev&quot; &quot;product-test&quot; &quot;product-prod&quot;)
REGIONS=(&quot;eu-central-1&quot; &quot;eu-west-1&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2. Credential Check&lt;/h3&gt;
&lt;p&gt;Before executing commands, the script verifies that each profile has valid credentials:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;check_credentials() {
    local profile=$1
    if aws sts get-caller-identity --profile &quot;${profile}&quot; &amp;amp;&amp;gt;/dev/null; then
        return 0
    else
        return 1
    fi
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If credentials are missing or expired, it automatically triggers the AWS SSO login flow.&lt;/p&gt;
&lt;h3&gt;3. Command Execution&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;execute_aws_cli_command&lt;/code&gt; function runs your AWS CLI command for each profile-region combination. Replace &lt;code&gt;aws sts get-caller-identity&lt;/code&gt; with your actual command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Example: List all S3 buckets
aws s3 ls --profile &quot;${profile}&quot; --region &quot;${region}&quot;

# Example: Describe EC2 instances
aws ec2 describe-instances --profile &quot;${profile}&quot; --region &quot;${region}&quot;

# Example: Get CloudFormation stack status
aws cloudformation describe-stacks --profile &quot;${profile}&quot; --region &quot;${region}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;4. Summary Report&lt;/h3&gt;
&lt;p&gt;The script tracks execution statistics and provides a summary:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[INFO] Summary: Total=10, Success=10, Failed=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It exits with code 1 if any commands failed.&lt;/p&gt;
&lt;h2&gt;💡 Use Cases&lt;/h2&gt;
&lt;p&gt;This script is useful for:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resource Auditing&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Check security group configurations across all accounts&lt;/li&gt;
&lt;li&gt;List Lambda functions and their runtime versions&lt;/li&gt;
&lt;li&gt;Find untagged resources&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Compliance Checks&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Verify encryption settings on S3 buckets&lt;/li&gt;
&lt;li&gt;Check IAM password policies&lt;/li&gt;
&lt;li&gt;Audit CloudTrail configurations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Cost Management&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Identify unused resources across environments&lt;/li&gt;
&lt;li&gt;Check for unattached EBS volumes&lt;/li&gt;
&lt;li&gt;Find stopped EC2 instances (see my post on &lt;a href=&quot;/2018/04/22/shut-down-cloudformation-stack-resources-over-night-using-aws-lambda/&quot;&gt;shutting down resources overnight&lt;/a&gt; for cost savings)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Bulk Operations&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Update tags across all resources&lt;/li&gt;
&lt;li&gt;Enable AWS Config in all accounts&lt;/li&gt;
&lt;li&gt;Deploy infrastructure consistently&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;⚠️ Important Considerations&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;AWS SSO Configuration&lt;/strong&gt;
Make sure your &lt;code&gt;~/.aws/config&lt;/code&gt; file has profiles configured correctly:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[profile product-dev]
sso_start_url = https://your-org.awsapps.com/start
sso_region = eu-central-1
sso_account_id = 123456789012
sso_role_name = ReadOnlyAccess
region = eu-central-1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Rate Limiting&lt;/strong&gt;
AWS has API rate limits. For large numbers of accounts, consider adding delays:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sleep 0.5  # Add to execute_aws_cli_command function
&lt;/code&gt;&lt;/pre&gt;
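&lt;p&gt;If a fixed delay is not enough, you can wrap each call in a small retry helper with exponential backoff. The following is just a sketch; the function name and limits are illustrative and not part of the original script:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Retry a command up to 3 times, doubling the delay after each failure
retry_with_backoff() {
  local max_attempts=3
  local delay=1
  local attempt=1
  while ! &quot;$@&quot;; do
    if [ &quot;$attempt&quot; -ge &quot;$max_attempts&quot; ]; then
      return 1
    fi
    sleep &quot;$delay&quot;
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Example: retry_with_backoff aws s3 ls --profile &quot;${profile}&quot; --region &quot;${region}&quot;
&lt;/code&gt;&lt;/pre&gt;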
&lt;p&gt;&lt;strong&gt;Error Handling&lt;/strong&gt;
The &lt;code&gt;set -euo pipefail&lt;/code&gt; directive makes the script fail fast. This is intentional: you want to know immediately if something goes wrong.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Permissions&lt;/strong&gt;
Ensure your SSO role has the necessary permissions for the commands you&apos;re executing. Test with a read-only command first.&lt;/p&gt;
&lt;h2&gt;🚀 Getting Started&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Save the script as &lt;code&gt;run-across-accounts.sh&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Make it executable: &lt;code&gt;chmod +x run-across-accounts.sh&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Update the &lt;code&gt;PROFILES&lt;/code&gt; and &lt;code&gt;REGIONS&lt;/code&gt; arrays&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;aws sts get-caller-identity&lt;/code&gt; with your command&lt;/li&gt;
&lt;li&gt;Run: &lt;code&gt;./run-across-accounts.sh&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let me know if you were able to solve your problem with this script!&lt;/p&gt;
</content:encoded></item><item><title>Importing DynamoDB Items from a CSV File Using the AWS CLI</title><link>https://www.sebastianhesse.de/2025/05/22/import-dynamodb-data-from-csv/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2025/05/22/import-dynamodb-data-from-csv/</guid><description>Easily re-import your DynamoDB items from a CSV file using a simple bash script and the AWS CLI — no complex tooling required.</description><pubDate>Thu, 22 May 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you&apos;ve exported items from a DynamoDB table into a CSV file and now want to import them back, you&apos;ll quickly realize that AWS doesn&apos;t offer a direct CSV import feature for DynamoDB. While you can use tools like AWS Glue or write custom applications, sometimes all you need is a small CLI-based solution.&lt;/p&gt;
&lt;p&gt;In this post, I&apos;ll walk you through how to use a bash script and the AWS CLI to re-import your data into DynamoDB.&lt;/p&gt;
&lt;h2&gt;🧪 Problem Context&lt;/h2&gt;
&lt;p&gt;I had a set of items in a DynamoDB table that I exported to a CSV file for backup and inspection. Each item had string fields with the following names:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;PK&lt;/code&gt; (Partition Key)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SK&lt;/code&gt; (Sort Key)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;createdAt&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;data&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The goal was to &lt;strong&gt;re-import these items into an existing DynamoDB table&lt;/strong&gt; using the AWS CLI.&lt;/p&gt;
&lt;h2&gt;📁 Sample CSV File&lt;/h2&gt;
&lt;p&gt;Here’s a sample of what the &lt;code&gt;data.csv&lt;/code&gt; looked like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PK,SK,createdAt,data
USER#123,SESSION#1,2025-05-06T12:00:00Z,Some data string
USER#124,SESSION#2,2025-05-06T13:00:00Z,Another string
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All values are strings, and the file includes a header row.&lt;/p&gt;
&lt;h2&gt;🛠️ The Script&lt;/h2&gt;
&lt;p&gt;Here’s a Bash script that reads each line of the CSV file and inserts the corresponding item into the DynamoDB table using the AWS CLI. It prints the result of each insertion to make it easier to debug or confirm progress.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash

TABLE_NAME=&quot;YourTableName&quot;
CSV_FILE=&quot;data.csv&quot;

awk &apos;NR &amp;gt; 1&apos; &quot;$CSV_FILE&quot; | while IFS=&apos;,&apos; read -r PK SK createdAt data; do
  echo &quot;Putting item: PK=$PK, SK=$SK&quot;

  result=$(aws dynamodb put-item \
    --table-name &quot;$TABLE_NAME&quot; \
    --item &quot;{
      \&quot;PK\&quot;: {\&quot;S\&quot;: \&quot;$PK\&quot;},
      \&quot;SK\&quot;: {\&quot;S\&quot;: \&quot;$SK\&quot;},
      \&quot;createdAt\&quot;: {\&quot;S\&quot;: \&quot;$createdAt\&quot;},
      \&quot;data\&quot;: {\&quot;S\&quot;: \&quot;$data\&quot;}
    }&quot; 2&amp;gt;&amp;amp;1)

  if [ $? -eq 0 ]; then
    echo &quot;✅ Successfully inserted PK=$PK&quot;
  else
    echo &quot;❌ Failed to insert PK=$PK&quot;
    echo &quot;Error: $result&quot;
  fi

  echo &quot;----------------------------------------&quot;
done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Replace &lt;code&gt;YourTableName&lt;/code&gt; with the name of your DynamoDB table.&lt;/p&gt;
&lt;h2&gt;⚠️ Common Pitfall&lt;/h2&gt;
&lt;p&gt;The exported CSV file might not end with a newline, in which case the last record might be skipped.
&lt;code&gt;awk&lt;/code&gt; handles this edge case, but if you&apos;re using a different tool, you can fix it by appending a newline to the file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;echo &amp;gt;&amp;gt; data.csv
&lt;/code&gt;&lt;/pre&gt;
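&lt;p&gt;Alternatively, you can make the read loop itself tolerant of a missing trailing newline: &lt;code&gt;read&lt;/code&gt; returns a non-zero status for a final unterminated line but still fills the variables, so additionally checking for a non-empty first field keeps that last record. A small self-contained sketch, reusing the field names from the script above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sample file whose last record has no trailing newline
printf &apos;USER#123,SESSION#1,2025-05-06T12:00:00Z,first\nUSER#124,SESSION#2,2025-05-06T13:00:00Z,last&apos; &amp;gt; data.csv

count=0
while IFS=&apos;,&apos; read -r PK SK createdAt data || [ -n &quot;$PK&quot; ]; do
  echo &quot;Processing PK=$PK, SK=$SK&quot;
  count=$((count + 1))
done &amp;lt; data.csv

echo &quot;Processed $count rows&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Both rows are processed, despite the missing newline at the end of the file.&lt;/p&gt;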
&lt;h2&gt;🚀 Ready to Go&lt;/h2&gt;
&lt;p&gt;This approach works great for small-to-medium CSVs where you don&apos;t want to spin up more complex tooling.
Just be mindful of CSV quirks and escaping needs (e.g., quoted strings or commas within fields), and you&apos;ll have your data back in DynamoDB in no time.&lt;/p&gt;
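&lt;p&gt;One way to avoid manual escaping is to let &lt;code&gt;jq&lt;/code&gt; build the &lt;code&gt;--item&lt;/code&gt; JSON for you, because values passed via &lt;code&gt;--arg&lt;/code&gt; are escaped automatically. A sketch of the idea, assuming &lt;code&gt;jq&lt;/code&gt; is installed and reusing the field names from above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;data=&apos;He said &quot;hello&quot;, then left&apos;

# jq escapes quotes and backslashes inside the values for us
item=$(jq -n \
  --arg pk &apos;USER#123&apos; --arg sk &apos;SESSION#1&apos; \
  --arg ts &apos;2025-05-06T12:00:00Z&apos; --arg d &quot;$data&quot; \
  &apos;{PK: {S: $pk}, SK: {S: $sk}, createdAt: {S: $ts}, data: {S: $d}}&apos;)

echo &quot;$item&quot;
# Then pass it along: aws dynamodb put-item --table-name &quot;$TABLE_NAME&quot; --item &quot;$item&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that this only solves the JSON escaping; fields that themselves contain commas still need proper CSV parsing before this step.&lt;/p&gt;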
&lt;p&gt;For larger imports, consider batching writes using &lt;a href=&quot;https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/batch-write-item.html&quot;&gt;&lt;code&gt;batch-write-item&lt;/code&gt;&lt;/a&gt;, or using &lt;a href=&quot;/2017/06/24/5-things-consider-writing-lambda-function/&quot;&gt;AWS Lambda&lt;/a&gt; for managed processing. If you&apos;re building applications that interact with DynamoDB, check out my guide on &lt;a href=&quot;/2020/05/14/using-dynamodb-local-and-testcontainers-in-java-within-bitbucket-pipelines/&quot;&gt;testing DynamoDB operations locally&lt;/a&gt; with DynamoDB Local and Testcontainers.&lt;/p&gt;
&lt;/content:encoded&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;Migrating a CDK Construct to projen and jsii&lt;/title&gt;&lt;link&gt;https://www.sebastianhesse.de/2021/03/01/migrating-cdk-construct-to-projen-and-jsii/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://www.sebastianhesse.de/2021/03/01/migrating-cdk-construct-to-projen-and-jsii/&lt;/guid&gt;&lt;description&gt;Learn how to convert your existing AWS CDK construct to use projen for easier maintenance and jsii for multi-language publishing&lt;/description&gt;&lt;pubDate&gt;Mon, 01 Mar 2021 00:00:00 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&amp;lt;p&amp;gt;CDK constructs are a great way to combine best practices and simplify your infrastructure code. Have you ever written your own CDK construct? Writing a construct is easy using the CDK CLI. But soon you&amp;apos;ll discover the hard parts: keeping all dependencies updated, aligning the CDK dependency versions to avoid version conflicts, and publishing your CDK construct to multiple repositories. &amp;lt;code&amp;gt;projen&amp;lt;/code&amp;gt; makes the CDK construct setup and maintenance a lot easier, and &amp;lt;code&amp;gt;jsii&amp;lt;/code&amp;gt; helps you release your TypeScript CDK construct to Java, Python, and C#. Here is a short tutorial about migrating a CDK construct to projen and jsii.&amp;lt;/p&amp;gt;
&lt;blockquote&gt;
&lt;p&gt;I have provided a beginner&apos;s step-by-step guide about &lt;a href=&quot;https://github.com/seeebiii/projen-test&quot;&gt;getting started with projen and jsii&lt;/a&gt;. It will help you explore typical options for projen and explains the process of publishing your CDK Construct to repositories like NPM, Maven, PyPi, and NuGet using jsii. I recommend reading it if you have no prior experience with projen or jsii.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;About projen and jsii&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/projen/projen&quot;&gt;projen&lt;/a&gt; is a tool to write your project configuration as code instead of managing it yourself. It was created to help you write CDK constructs. You only define your project configuration in a &lt;a href=&quot;https://github.com/seeebiii/projen-test/blob/main/.projenrc.js&quot;&gt;.projenrc.js&lt;/a&gt; file and projen will generate your project files for you. This includes a &lt;code&gt;package.json&lt;/code&gt; and other configuration files for &lt;code&gt;eslint&lt;/code&gt; or GitHub Actions. It is designed to automate all the boring project setup steps.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/aws/jsii&quot;&gt;jsii&lt;/a&gt; is the technology behind the AWS CDK that allows you to write CDK Constructs in TypeScript/JavaScript and compile them to other languages like Java or Python. There&apos;s a good &lt;a href=&quot;https://aws.amazon.com/blogs/opensource/generate-python-java-dotnet-software-libraries-from-typescript-source/&quot;&gt;AWS blog post&lt;/a&gt; about how it works.&lt;/p&gt;
&lt;h2&gt;Setup Your Construct Project&lt;/h2&gt;
&lt;p&gt;Let&apos;s start to migrate your existing CDK Construct to projen. You have probably created your CDK construct using the following &lt;code&gt;cdk&lt;/code&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cd my-cdk-construct
cdk init lib --language=typescript
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That command creates various folders and files with pre-defined defaults for writing a CDK construct in TypeScript. For example, it initializes your project with a &lt;code&gt;lib&lt;/code&gt; and &lt;code&gt;test&lt;/code&gt; folder as well as an example construct file and a related test file. Finally, you can run your tests using &lt;a href=&quot;https://jestjs.io/&quot;&gt;Jest&lt;/a&gt; with &lt;code&gt;npm run test&lt;/code&gt;. This is the basis of the next steps, even though you might have added further lib files, tests, or tools like &lt;a href=&quot;https://eslint.org/&quot;&gt;ESLint&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Please note&lt;/strong&gt; that you can only use &lt;code&gt;projen&lt;/code&gt; and &lt;code&gt;jsii&lt;/code&gt; with TypeScript/JavaScript as the source language at the moment. This means you cannot create a CDK construct in Java and publish it to NPM.&lt;/p&gt;
&lt;h2&gt;Migrating Your CDK Construct to projen&lt;/h2&gt;
&lt;p&gt;💡 Before you start migrating your CDK construct to projen, you might want to open this blog post about &lt;a href=&quot;https://www.matthewbonig.com/2020/10/04/converting-to-projen/&quot;&gt;converting your CDK construct to using projen by Matthew Bonig&lt;/a&gt; in another browser tab. His article also covers some interesting facts and small errors he has experienced in the process, so they might help you as well.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Before you execute any command, make sure you have a &lt;strong&gt;clean Git status&lt;/strong&gt;. This ensures that you can easily revert any undesired changes.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The following commands will initialize your project with &lt;code&gt;projen&lt;/code&gt;. It will create a file called &lt;code&gt;.projenrc.js&lt;/code&gt; containing a default &lt;code&gt;projen&lt;/code&gt; configuration. Then it will automatically execute this file for you to generate further files and folders based on your &lt;code&gt;projen&lt;/code&gt; configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cd my-cdk-construct
npx projen new awscdk-construct
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After executing the commands, you&apos;ll see lots of changes in your Git status. For example, &lt;code&gt;projen&lt;/code&gt; expects source files in &lt;code&gt;src&lt;/code&gt; whereas a CDK construct initialized via the CDK CLI expects them to live in &lt;code&gt;lib&lt;/code&gt;. Also, there will be a folder called &lt;code&gt;.projen&lt;/code&gt; containing configuration files for &lt;code&gt;projen&lt;/code&gt;. You don&apos;t need to look into all of the new files immediately. &lt;code&gt;projen&lt;/code&gt; is managing the contents of files like GitHub Actions or TypeScript configurations based on the settings you define in &lt;code&gt;.projenrc.js&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;Updating Your Project Files with projen&lt;/h3&gt;
&lt;p&gt;After migrating your CDK construct to &lt;code&gt;projen&lt;/code&gt;, you should always follow this process to update project files generated by &lt;code&gt;projen&lt;/code&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Make according changes in &lt;code&gt;.projenrc.js&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;npx projen&lt;/code&gt; and &lt;code&gt;projen&lt;/code&gt; will synthesize the changes to all project files. 💡 Hint: create an alias like &lt;code&gt;pj&lt;/code&gt; in your command line to avoid typing &lt;code&gt;npx projen&lt;/code&gt; over and over again.&lt;/li&gt;
&lt;li&gt;Never manually edit the generated files because &lt;code&gt;projen&lt;/code&gt; will overwrite them the next time!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you have used tools like &lt;code&gt;eslint&lt;/code&gt; or &lt;a href=&quot;https://dependabot.com/&quot;&gt;Dependabot&lt;/a&gt; before, you&apos;ll probably see changes in their related files as well. If you don&apos;t like the defaults, you can always change the settings in &lt;code&gt;.projenrc.js&lt;/code&gt;. However, you probably won&apos;t have to change much, since projen applies common settings for eslint. I only noticed &lt;strong&gt;two changes compared to my settings&lt;/strong&gt;: interface names must start with an uppercase &apos;I&apos;, and interface properties must be readonly. Everything else was just minor style adjustments.&lt;/p&gt;
&lt;p&gt;💡 Since I don&apos;t know all varieties of CDK construct setups, you might run into other errors after adding &lt;code&gt;projen&lt;/code&gt;. If you have any problems you can&apos;t solve by yourself, feel free to reach out to me or send your question to the &lt;a href=&quot;https://cdk.dev/&quot;&gt;CDK developers Slack&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Important General Settings&lt;/h3&gt;
&lt;p&gt;Now it&apos;s a good time to have a look at the most important settings you can set in &lt;code&gt;.projenrc.js&lt;/code&gt;. I&apos;ll outline them below and explain them in case it&apos;s not too obvious.&lt;/p&gt;
&lt;p&gt;&amp;lt;table&amp;gt;&amp;lt;tbody&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Setting&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Description&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;projectType&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Define if this project is a library or app. For a library (i.e. CDK construct), you should use &amp;lt;code&amp;gt;ProjectType.LIB&amp;lt;/code&amp;gt;.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;packageManager&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Define which package manager you want to use, e.g. Yarn or NPM. Set the value like &amp;lt;code&amp;gt;NodePackageManager.NPM&amp;lt;/code&amp;gt;.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;cdkVersion&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;With this property you are defining the CDK version you want to use for all CDK dependencies. Due to the nature of CDK, it&apos;s important to align the numbers.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;cdkDependencies&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Enter all the CDK dependencies that you are using in your construct. You probably had them defined in your package.json file before but now you need to add them here. Add them like &amp;lt;code&amp;gt;[&apos;@aws-cdk/core&apos;, &apos;...&apos;]&amp;lt;/code&amp;gt;. Since you specify the CDK version with &amp;lt;code&amp;gt;cdkVersion&amp;lt;/code&amp;gt;, you don&apos;t need to set it here.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;bundledDeps&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;List of dependencies you want to bundle with your construct. This might be necessary in case you use some constructs like &amp;lt;code&amp;gt;NodejsFunction&amp;lt;/code&amp;gt; for a Lambda function. 
It will build your Lambda function&apos;s code only when the Lambda function is used in a CDK stack. And then those dependencies need to be available.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;deps&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Any other regular dependencies that your construct is using.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;peerDeps&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;A list of peer dependencies. &amp;lt;code&amp;gt;projen&amp;lt;/code&amp;gt; will automatically add all &amp;lt;code&amp;gt;@aws-cdk&amp;lt;/code&amp;gt; dependencies to the list of peer dependencies, so you don&apos;t need to define them here. But you can add any other dependencies here that you&apos;d like to have in the resulting &amp;lt;code&amp;gt;peerDependencies&amp;lt;/code&amp;gt; of the &amp;lt;code&amp;gt;package.json&amp;lt;/code&amp;gt; file.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;packageName&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;By default, &amp;lt;code&amp;gt;projen&amp;lt;/code&amp;gt; will take the &amp;lt;code&amp;gt;name&amp;lt;/code&amp;gt; property as the name in &amp;lt;code&amp;gt;package.json&amp;lt;/code&amp;gt;. However, it might be necessary to overwrite it here.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;gitignore,&amp;lt;br&amp;gt;npmignore&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Should be self explanatory. You can define a list of strings matching files or folders you&apos;d like to ignore for Git or NPM.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/tbody&amp;gt;&amp;lt;/table&amp;gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Don&apos;t forget to run &lt;code&gt;npx projen&lt;/code&gt; each time you make a change in &lt;code&gt;.projenrc.js&lt;/code&gt;. Otherwise &lt;code&gt;projen&lt;/code&gt; won&apos;t synthesize the config changes to your project files.&lt;/p&gt;
&lt;h3&gt;Release Settings&lt;/h3&gt;
&lt;p&gt;You have to consider that by default &lt;code&gt;projen&lt;/code&gt; assumes you want to publish your CDK construct to NPM. Hence, it sets up GitHub Actions that perform the relevant tasks for you, like bundling, running the tests and releasing the artifact. If you want to skip this for now, consider adjusting the following settings:&lt;/p&gt;
&lt;p&gt;&amp;lt;table&amp;gt;&amp;lt;tbody&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Setting&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Description&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;releaseBranches&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;A list of branch names. The default is &amp;lt;code&amp;gt;[&apos;main&apos;]&amp;lt;/code&amp;gt; but you can adjust it to whatever branches you need. If you set this to an empty array, then you can only trigger a release by manually starting the release workflow (in case you enabled to add a release workflow, see below).&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;releaseEveryCommit&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;If set to &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; and a commit is added on any of the branches defined in &amp;lt;code&amp;gt;releaseBranches&amp;lt;/code&amp;gt;, then a release is triggered.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;releaseSchedule&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Optional: cron expression to regularly release a new version.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;releaseToNpm&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;If a new version should be released to NPM using GitHub Actions.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;releaseWorkflow&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;If you want to have a release workflow using GitHub Actions.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/tbody&amp;gt;&amp;lt;/table&amp;gt;&lt;/p&gt;
&lt;p&gt;Again, don&apos;t forget to run &lt;code&gt;npx projen&lt;/code&gt; to update your project files after changing the &lt;code&gt;projen&lt;/code&gt; settings in &lt;code&gt;.projenrc.js&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;Location of Lambda Function Code&lt;/h3&gt;
&lt;p&gt;In case you are creating a Lambda function in your CDK construct, you might wonder where you should put your Lambda function code and how you bundle it. When using &lt;code&gt;projen&lt;/code&gt;, I&apos;d say a good approach is to keep all the Lambda function code in a subfolder next to your CDK construct source files. For example, the code could go into &lt;code&gt;src/lambda&lt;/code&gt;. This ensures that &lt;code&gt;projen&lt;/code&gt; picks up the files and includes them in the artifact that&apos;s being released to NPM. You can read my other blog post if you want to know more ways about &lt;a href=&quot;/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/&quot;&gt;how to bundle your Lambda function code in a CDK construct&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Building Your Construct&lt;/h3&gt;
&lt;p&gt;Now that you have adjusted the necessary &lt;code&gt;projen&lt;/code&gt; settings and fixed any code or linting issues, you can verify that your code compiles. Use commands like &lt;code&gt;npm run build&lt;/code&gt; or &lt;code&gt;npm run test&lt;/code&gt; to verify your code. (Use &lt;code&gt;yarn&lt;/code&gt; instead if you kept &lt;code&gt;projen&lt;/code&gt;&apos;s default &lt;code&gt;packageManager&lt;/code&gt; setting.) Is your code compiling? Great! If not, did you forget anything, or did I fail to cover something here? Let me know!&lt;/p&gt;
&lt;p&gt;After a green build, you are ready to commit your changes and push them to your remote Git repository. Then, the GitHub Actions workflows configured under &lt;code&gt;.github/workflows&lt;/code&gt; will trigger and build (+ release) your CDK construct 🚀&lt;/p&gt;
&lt;h3&gt;Custom Build or Workflow Steps&lt;/h3&gt;
&lt;p&gt;Before migrating your CDK construct to &lt;code&gt;projen&lt;/code&gt;, you probably had some custom scripts or commands in your project. For example, you prepared certain things for a Lambda function or your CDK construct in general. With &lt;code&gt;projen&lt;/code&gt; you can add custom build and workflow steps as well. Here is how you can do it in &lt;code&gt;.projenrc.js&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const project = new AwsCdkConstructLibrary({...});

// add a build step:
project.buildTask.exec(&apos;echo &quot;Hello World&quot;&apos;);

// add a job to the GitHub Action file of release.yml:
project.releaseWorkflow.addJobs({
  example: {
    name: &apos;Example Job&apos;,
    &apos;runs-on&apos;: &apos;ubuntu-latest&apos;,
    steps: [{...}],
  },
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As soon as you update the project files by running &lt;code&gt;npx projen&lt;/code&gt; again, you&apos;ll see changes in &lt;code&gt;.projen/tasks.json&lt;/code&gt; and &lt;code&gt;.github/workflows/release.yml&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Publishing to Multiple Repositories Using jsii&lt;/h2&gt;
&lt;p&gt;After you have migrated your code, it&apos;s time to add some magic to your CDK construct. The magic comes from the combination of &lt;code&gt;projen&lt;/code&gt; and &lt;code&gt;jsii&lt;/code&gt;. Their biggest advantage is that you can make your CDK construct available not only on NPM but also on Maven, PyPi, and NuGet with just a few settings.&lt;/p&gt;
&lt;p&gt;As &lt;a href=&quot;#release-settings&quot;&gt;described above&lt;/a&gt;, &lt;code&gt;projen&lt;/code&gt; provides a few release settings. Use them to publish your CDK construct to NPM. If you want to publish your CDK construct to Maven, PyPi or NuGet, then consider the following settings:&lt;/p&gt;
&lt;p&gt;&amp;lt;table&amp;gt;&amp;lt;tbody&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Setting&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Repository&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Description&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;publishToMaven&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Maven&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Allows you to configure the Java package, group id and artifact id.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;publishToPypi&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;PyPi&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Allows you to configure the dist name and module.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;publishToNuget&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;NuGet&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;Allows you to configure the namespace and package id.&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/tbody&amp;gt;&amp;lt;/table&amp;gt;&lt;/p&gt;
&lt;p&gt;Enabling these settings (and running &lt;code&gt;npx projen&lt;/code&gt; again 😉) will create appropriate steps in the &lt;code&gt;release.yml&lt;/code&gt; GitHub Action. You can find detailed steps for each repository in my guide about &lt;a href=&quot;https://github.com/seeebiii/projen-test#publishing-to-different-repositories&quot;&gt;creating a CDK construct using projen and jsii on GitHub&lt;/a&gt;. Besides that, you don&apos;t need to do anything. Just commit your changes and wait for the magic to happen!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/projen-github-action-success.png&quot; alt=&quot;Successful CDK construct release using projen and jsii.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;A successful release using projen and jsii. Example taken from &lt;a href=&quot;https://github.com/seeebiii/projen-test&quot;&gt;projen-test&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I hope you were successful in migrating your existing CDK construct to &lt;code&gt;projen&lt;/code&gt; and &lt;code&gt;jsii&lt;/code&gt;. In my opinion it offers a really good way to manage your project and hide quite a few complicated steps. Also, shipping to various repositories works like a charm!&lt;/p&gt;
&lt;p&gt;Are you running into any errors? Comment below and let me see if I can help you 😊&lt;/p&gt;
&lt;/content:encoded&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;Using Spring Boot On AWS Lambda: Clever or Dumb?&lt;/title&gt;&lt;link&gt;https://www.sebastianhesse.de/2021/02/14/using-spring-boot-on-aws-lambda-clever-or-dumb/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://www.sebastianhesse.de/2021/02/14/using-spring-boot-on-aws-lambda-clever-or-dumb/&lt;/guid&gt;&lt;description&gt;Should you run Spring Boot on AWS Lambda? Detailed analysis of advantages, disadvantages, cold start impact, and GraalVM alternatives for Java serverless functions.&lt;/description&gt;&lt;pubDate&gt;Sun, 14 Feb 2021 00:00:00 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&amp;lt;p&amp;gt;I often notice people wondering if and how it&amp;apos;s possible to run Spring Boot on AWS Lambda functions. I understand this because developers in the Java ecosystem often use the Spring Framework. And since more and more people build Lambda functions using Java, the question is bound to come up at some point. It&amp;apos;s definitely possible to run Spring Boot on AWS Lambda. However, if you think about it a second time, you&amp;apos;ll find many reasons why it is a bad idea. This blog post discusses the advantages and disadvantages of running Spring Boot on AWS Lambda on a conceptual level. And in case I don&amp;apos;t convince you, I&amp;apos;ll at least provide some hints to overcome the disadvantages.&amp;lt;/p&amp;gt;
&lt;blockquote&gt;
&lt;p&gt;Are you using Java or even Spring (Boot) on AWS Lambda? I have used it in production systems and I believe it&apos;s good for certain use cases like background data processing. What is your use case? Do you try to overcome the disadvantages and if so, how? Just add a comment below or mention me on &lt;a href=&quot;https://twitter.com/seeebiii&quot;&gt;Twitter&lt;/a&gt; 👍&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Some Context Around Spring Boot And AWS Lambda&lt;/h2&gt;
&lt;p&gt;As you all probably know (&lt;a href=&quot;/2019/07/21/going-serverless-why-and-how-1/&quot;&gt;if not, then read here&lt;/a&gt;), AWS Lambda and similar Function-as-a-Service (FaaS) solutions let you run code in the cloud without managing any of the underlying infrastructure. This enables a cloud provider to better scale your code because these functions are small enough to be started on demand. Although Java is one of the supported languages with a &lt;a href=&quot;https://blog.symphonia.io/posts/2020-06-30_analyzing_cold_start_latency_of_aws_lambda&quot;&gt;slower initial start time (= cold start)&lt;/a&gt;, it can still be a good choice for AWS Lambda because of the big ecosystem it has grown over the years.&lt;/p&gt;
&lt;p&gt;For example, the Spring Framework is one of the most popular Java frameworks. It collects best practices for working with things like databases and HTTP. Eventually, Spring Boot made it even simpler to connect the different Spring modules. A typical use case for Spring Boot is to build a microservice that can handle HTTP requests. You just define a &lt;code&gt;@Controller&lt;/code&gt;, give it a &lt;code&gt;@RequestMapping&lt;/code&gt;, and have your first endpoint ready to react to an HTTP request (read &lt;a href=&quot;https://www.baeldung.com/building-a-restful-web-service-with-spring-and-java-based-configuration&quot;&gt;this blog post&lt;/a&gt; as an example).&lt;/p&gt;
&lt;h2&gt;Connecting AWS Lambda And Spring Boot&lt;/h2&gt;
&lt;p&gt;Unfortunately, this setup won&apos;t work out of the box if you&apos;re running this code on AWS Lambda. The reason is that AWS Lambda uses an event-based mechanism. An event can consist of any kind of serializable data. However, your Lambda function cannot receive HTTP requests directly. (Of course it can make HTTP requests to other services, just not the other way around.) This is why you need a service or application in front of your AWS Lambda function. Such a service can receive HTTP requests and forward them to your Lambda function. A typical example is an &lt;a href=&quot;https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html&quot;&gt;AWS API Gateway&lt;/a&gt;. But you can also &lt;a href=&quot;https://docs.aws.amazon.com/lambda/latest/dg/services-alb.html&quot;&gt;use an AWS Application Load Balancer&lt;/a&gt;. These services keep the HTTP connection open until your Lambda function responds to the HTTP event. Without this setup you won&apos;t receive any HTTP events in your Lambda function.&lt;/p&gt;
&lt;p&gt;In order to start using Spring Boot on AWS Lambda, I suggest checking out the different helpers in &lt;a href=&quot;https://github.com/awslabs/aws-serverless-java-container&quot;&gt;aws-serverless-java-container&lt;/a&gt;. This is a collection of Java classes that help you use typical Java frameworks on AWS Lambda, including Spring. They already provide the necessary glue code to connect the AWS Lambda handler with Spring Boot. There are also other (but similar) ways to integrate the Spring Framework, like using &lt;a href=&quot;https://rieckpil.de/java-aws-lambda-with-spring-cloud-function/&quot;&gt;Spring Cloud Functions on AWS Lambda&lt;/a&gt;. In the end, the architecture looks similar to this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/spring-boot-lambda-architecture.jpeg&quot; alt=&quot;Architecture diagram showing HTTP request flow from API Gateway to Spring Boot running in AWS Lambda function&quot; /&gt;&lt;/p&gt;
&lt;p&gt;How an HTTP request flows as an event from API Gateway to Spring Boot in your Lambda function.&lt;/p&gt;
&lt;h2&gt;Advantages&lt;/h2&gt;
&lt;p&gt;Let&apos;s discuss the advantages of this approach because there are definitely a few. First, you can set up an API Gateway that forwards all HTTP requests as events using a proxy integration with AWS Lambda. This basically enables you to configure a wildcard path in API Gateway and let your Spring Boot app handle all the internal routing. Using this approach, you and your team can continue using Spring Boot like you&apos;re used to. Second, as soon as a Lambda function instance has initialized the Spring Boot container, this instance will respond with consistent performance. The performance is as expected from Java, at least that&apos;s my experience with Java Lambda functions.&lt;/p&gt;
&lt;p&gt;However, the most obvious advantage is that you don&apos;t have to manage the underlying instances with AWS Lambda. But this advantage has a catch: You need to ensure you are not storing any session data (or similar) inside your Spring Boot app. Remember that you should keep Lambda functions stateless, otherwise scaling becomes harder.&lt;/p&gt;
&lt;h2&gt;Disadvantages&lt;/h2&gt;
&lt;p&gt;You might have recognized already that using API Gateway + AWS Lambda + Spring Boot duplicates responsibilities. In the world before Serverless, you would have used Spring Boot alone to handle HTTP requests and implement your business logic. Now the combination of API Gateway and AWS Lambda replaces that part. Even more than that, it gives you better scalability precisely because of this separation of concerns from an operational perspective. This means you can put a big question mark on the idea of running Spring Boot inside AWS Lambda.&lt;/p&gt;
&lt;p&gt;But duplicated responsibilities are not the only bad sign. The unnecessary complexity of integrating Spring Boot with AWS Lambda should also make you think twice. In the end, all these wrapper and helper classes introduce more potential for bugs. Together with Spring Boot, they of course also hurt the cold start times of your Lambda functions. Apart from that, Spring Boot on its own starts too slowly for what is expected of Lambda functions. This means you&apos;ll see massive spikes in your response times whenever a new Lambda function instance is started.&lt;/p&gt;
&lt;p&gt;Although not worrying about the instances of your Lambda function sounds appealing, you have to remember that you cannot control how many instances you&apos;ll have. This can have negative consequences if you&apos;re e.g. making requests to an SQL database in direct response to external events. You might run out of database connections quickly if a spike hits your Lambda function, because AWS Lambda spins up more and more instances.&lt;/p&gt;
&lt;h2&gt;My Personal Opinion And Experience&lt;/h2&gt;
&lt;p&gt;I (and many other Serverless developers) recommend keeping AWS Lambda functions simple and small. If you follow this recommendation, your functions are a lot easier to scale. If you instead build another monolith using Spring Boot on top of AWS Lambda, you are doing it wrong. Keep that in mind while developing AWS Lambda functions with Java, and especially if you include Spring as well.&lt;/p&gt;
&lt;p&gt;Having said that, I have successfully used Spring in Java Lambda functions in previous projects. I enjoyed running them in the background for data processing. However, I&apos;m not convinced that they are a good choice for &quot;customer facing&quot; endpoints like a REST API. The only exception is if you keep a minimum number of Lambda instances running. But then you can question whether AWS Lambda is the right choice for your Spring Boot app. In such a case, I&apos;d rather suggest using traditional EC2 instances or ECS.&lt;/p&gt;
&lt;p&gt;If you still want to use Spring Boot on AWS Lambda, you should consider some recent developments. For example, you might have heard of GraalVM or Quarkus. You can use them to &lt;a href=&quot;https://aws.amazon.com/blogs/architecture/field-notes-optimize-your-java-application-for-aws-lambda-with-quarkus/&quot;&gt;run Java native images on AWS Lambda&lt;/a&gt; by providing a custom runtime. Also, &lt;a href=&quot;https://github.com/spring-projects/spring-framework/wiki/GraalVM-native-image-support&quot;&gt;Spring is adding support for GraalVM&lt;/a&gt; native images. I can recommend looking into the slides of &lt;a href=&quot;https://www.slideshare.net/VadymKazulkin/adopting-java-for-the-serverless-world-at-serverless-meetup-new-york-and-boston&quot;&gt;Vadym&apos;s talk &quot;Adopting Java for the Serverless world&quot;&lt;/a&gt;, which goes into more detail.&lt;/p&gt;
</content:encoded></item><item><title>Serverless Sending and Receiving E-Mails, the CDK Way</title><link>https://www.sebastianhesse.de/2021/01/31/serverless-sending-and-receiving-e-mails-the-cdk-way/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2021/01/31/serverless-sending-and-receiving-e-mails-the-cdk-way/</guid><description>Automate email forwarding with AWS SES using CDK constructs. Verify domains, setup receipt rules, and forward emails to Gmail—all with Infrastructure as Code.</description><pubDate>Sun, 31 Jan 2021 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Serverless sending and receiving e-mails using AWS is not fun in my opinion. AWS offers Simple Email Service (SES) to achieve this. But the UI and also the Infrastructure as Code (IaC) support are lacking. You often need to change settings manually, which is error-prone. When I recently built another landing page for myself, I was repeating the same steps as for the previous page. It bothered me that there was no easy automation to do this for me. That&apos;s what I&apos;m presenting to you today: my first AWS CDK Constructs to send and receive e-mails.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can find the source code of the AWS CDK Constructs in the &lt;a href=&quot;https://github.com/seeebiii/ses-email-forwarding&quot;&gt;ses-email-forwarding GitHub repository&lt;/a&gt;. Besides that, I also made &lt;a href=&quot;https://github.com/seeebiii/ses-verify-identities&quot;&gt;ses-verify-identities&lt;/a&gt; available as separate AWS CDK Constructs.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Setup Steps With AWS SES&lt;/h2&gt;
&lt;p&gt;Have you ever built your own landing page with its own domain? Or have you ever wanted to use another alias for receiving e-mails? Or did you just want to use an e-mail address for a domain you own without the hassle of setting up your own mail server, and instead forward the mails to your existing inbox? If yes, I have some great news for you! 😊 Initially, I used a library called &lt;a href=&quot;https://github.com/arithmetric/aws-lambda-ses-forwarder&quot;&gt;aws-lambda-ses-forwarder&lt;/a&gt; for serverless sending and receiving e-mails. I always had to follow these manual steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Setup AWS SES and &lt;a href=&quot;https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-getting-started-verify.html&quot;&gt;verify my domain&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Configure &lt;a href=&quot;https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-getting-started-receipt-rule.html&quot;&gt;receipt rules&lt;/a&gt; for the different e-mail addresses.&lt;/li&gt;
&lt;li&gt;Then &lt;a href=&quot;https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-action-lambda.html&quot;&gt;setup a Lambda function SES Action&lt;/a&gt; that forwards all my e-mails from SES to a Gmail address.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-smtp.html&quot;&gt;Configure SMTP for AWS SES&lt;/a&gt; and setup Gmail to send e-mails with my verified domain.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;My new AWS CDK constructs below automate the first three setup steps for AWS SES. They even allow you to automatically verify your e-mail addresses or domains within SES (if you&apos;re using Route53). Just deploy the constructs in your AWS CDK stack and you&apos;re ready to go. You only need to create SMTP credentials for AWS SES and configure Gmail (or another provider). Let&apos;s see how it works ⬇️&lt;/p&gt;
&lt;h2&gt;AWS CDK Constructs to the Rescue&lt;/h2&gt;
&lt;p&gt;The best way to show how it works is with a few lines of code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;new EmailForwardingRuleSet(this, &apos;EmailForwardingRuleSet&apos;, {
  enableRuleSet: true,
  emailForwardingProps: [{
    domainName: &apos;example.org&apos;,
    verifyDomain: true,
    fromPrefix: &apos;noreply&apos;,
    emailMappings: [{
      receivePrefix: &apos;hello&apos;,
      targetEmails: [&apos;whatever+hello@provider.com&apos;]
    }]
  }]
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The code uses the &lt;code&gt;EmailForwardingRuleSet&lt;/code&gt; construct to configure everything. Let me quickly explain the most important things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can configure the rule set to be enabled automatically because you can only have one active rule set in AWS SES.&lt;/li&gt;
&lt;li&gt;You define e-mail forwarding rules by specifying your domain name and the e-mail mappings. E-mail mappings define a &lt;code&gt;receivePrefix&lt;/code&gt;, which is your e-mail alias, and a list of &lt;code&gt;targetEmails&lt;/code&gt;. All e-mails to your alias/prefix are forwarded to these target e-mails. The forwarded e-mails are sent from &lt;code&gt;noreply@example.org&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If the domain is managed by Route53, then you can automatically verify the domain. This setting will configure some custom resources to validate the domains.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That&apos;s it already. You can further extend the configuration of course. For example, you can add more e-mail mappings for various aliases/prefixes. Or you can add another domain with individual e-mail mappings. It&apos;s up to you 👍&lt;/p&gt;
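&lt;p&gt;To make the mapping semantics concrete, here is a small illustrative helper that mirrors how an incoming recipient address is resolved to its target addresses. The function name and object shapes are hypothetical, not part of the construct&apos;s real API:&lt;/p&gt;

```javascript
// Illustrative only: mirrors the emailMappings semantics described above.
// resolveTargets and its parameter shapes are hypothetical names.
function resolveTargets(recipient, domainName, emailMappings) {
  const atIndex = recipient.indexOf('@');
  const prefix = recipient.slice(0, atIndex);
  const domain = recipient.slice(atIndex + 1);
  // only handle mail for the configured domain
  if (domain !== domainName) return [];
  // find the mapping whose receivePrefix matches the alias
  const mapping = emailMappings.find(function (m) {
    return m.receivePrefix === prefix;
  });
  return mapping ? mapping.targetEmails : [];
}
```

&lt;p&gt;With the configuration from above, an e-mail to &lt;code&gt;hello@example.org&lt;/code&gt; would be forwarded to &lt;code&gt;whatever+hello@provider.com&lt;/code&gt;.&lt;/p&gt;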
&lt;h3&gt;Deploy Your AWS CDK Stack&lt;/h3&gt;
&lt;p&gt;Now you just need to put everything into an AWS CDK stack. Before you start, initialize a new CDK stack and install my CDK constructs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cdk init app --language=typescript
npm i -D @seeebiii/ses-email-forwarding
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, create a new file that contains your stack. It can look similar to the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const app = new cdk.App();

class EmailForwardingSetupStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new EmailForwardingRuleSet(this, &apos;EmailForwardingRuleSet&apos;, {
      // define your config here
    });
  }
}

new EmailForwardingSetupStack(app, &apos;EmailForwardingSetupStack&apos;, {
  env: {
    account: &apos;&amp;lt;account-id&amp;gt;&apos;,
    region: &apos;&amp;lt;region&amp;gt;&apos;
  }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, you use the &lt;code&gt;cdk deploy&lt;/code&gt; command to deploy the stack. Everything else like verifying your domains and setting up SES will be done for you. Now you can start with serverless sending and receiving e-mails!&lt;/p&gt;
&lt;p&gt;In the end, you&apos;ll have this architecture for serverless receiving e-mails using AWS SES:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-ses-receive-emails-serverless.jpeg&quot; alt=&quot;Architecture how an email is received and forwarded in a serverless way.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;All &lt;strong&gt;1.) incoming e-mails&lt;/strong&gt; are handled by SES. SES will &lt;strong&gt;2.) move them to S3&lt;/strong&gt; and afterwards &lt;strong&gt;3.) invoke a Lambda function&lt;/strong&gt;. This Lambda function &lt;strong&gt;4.) loads the e-mail&lt;/strong&gt; from S3 and &lt;strong&gt;5.) forwards it&lt;/strong&gt; to either Gmail or another target e-mail address. In case you are interested, I have written another blog post about how you can &lt;a href=&quot;/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/&quot;&gt;include AWS Lambda functions inside a CDK construct&lt;/a&gt;.&lt;/p&gt;
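&lt;p&gt;Step 5 deserves a closer look: since SES only allows sending from verified identities, the forwarding Lambda function has to rewrite the e-mail headers before sending the mail out again. Here is a simplified sketch of that rewriting; the function and field names are illustrative, not the actual implementation:&lt;/p&gt;

```javascript
// Illustrative sketch of the header rewriting an SES e-mail forwarder performs.
// buildForwardHeaders and its parameter shapes are hypothetical names.
function buildForwardHeaders(original, config) {
  return {
    // SES may only send from a verified identity, so use the configured prefix
    From: config.fromPrefix + '@' + config.domainName,
    // keep the original sender reachable via Reply-To
    'Reply-To': original.from,
    To: config.targetEmails.join(', '),
    Subject: original.subject
  };
}
```

&lt;p&gt;The original sender is preserved in &lt;code&gt;Reply-To&lt;/code&gt;, so you can still answer the e-mail directly from your inbox.&lt;/p&gt;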
&lt;h2&gt;Sending E-Mails with AWS SES&lt;/h2&gt;
&lt;p&gt;Unfortunately, this is the one remaining step that can&apos;t be automated. Once the above CDK stack is deployed to your AWS account, you need to &lt;a href=&quot;https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html?icmpid=docs_ses_console&quot;&gt;create SMTP credentials in AWS SES&lt;/a&gt;. These credentials allow you to send e-mails from any e-mail application or provider like Gmail. However, this will only work if you have verified your sender domain in AWS SES. Otherwise, your e-mails will only be delivered to verified e-mail addresses, which prevents you from sending spam through AWS SES. If you would like to verify the target e-mail addresses with the CDK construct, just use the setting &lt;code&gt;verifyTargetEmailAddresses&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Other Solutions&lt;/h2&gt;
&lt;p&gt;You may have already asked yourself whether there are really no other existing solutions to this problem. I can assure you there are other solutions for sending and receiving e-mails in a serverless way. However, they either did not solve my problem as I expected or I discovered them too late. Here are a few alternatives:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/arithmetric/aws-lambda-ses-forwarder&quot;&gt;aws-lambda-ses-forwarder&lt;/a&gt; -&amp;gt; An NPM package to be used in a Lambda function. It can be triggered by an SES event and forwards e-mails to e.g. Gmail. Unfortunately, using this library on its own still requires you to set up all the necessary SES resources by hand. That said, it offers a really flexible configuration for forwarding e-mails.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://pypi.org/project/aws-cfn-ses-domain/&quot;&gt;aws-cfn-ses-domain&lt;/a&gt; -&amp;gt; CloudFormation custom resources for domain and e-mail verification. This helps if you&apos;re writing your infrastructure in CloudFormation but it&apos;s missing some other pieces, like e-mail handling in general.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/0x4447/0x4447_product_s3_email&quot;&gt;S3 Email&lt;/a&gt; -&amp;gt; A combination of S3 and SES where e-mails are stored on S3 and S3 is used as the &quot;e-mail interface&quot;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://simplelogin.io/&quot;&gt;simplelogin.io&lt;/a&gt; -&amp;gt; A SaaS app to set up various e-mail aliases. You can also &lt;a href=&quot;https://github.com/simple-login/app&quot;&gt;deploy it yourself on AWS&lt;/a&gt; and connect the stack with SES. It&apos;s probably the most user-friendly way to solve the use cases I&apos;ve mentioned above. However, I only discovered it after I had implemented most of the things. Also, the self-hosting steps looked like too much work.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://improvmx.com/&quot;&gt;improvmx.com&lt;/a&gt; -&amp;gt; A SaaS app similar to simplelogin.io. It can create e-mail aliases for you and forward the e-mails to another address like Gmail. It&apos;s pretty similar to what my CDK constructs can do for you. Like the other one, I only discovered this solution after I had implemented most of the things. Unfortunately, there&apos;s no self-hosted version available as far as I know.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I&apos;ve learned a lot while building my first AWS CDK construct! One tricky part was to &lt;a href=&quot;/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/&quot;&gt;bundle a Lambda function inside the AWS CDK Construct&lt;/a&gt; as mentioned above. I&apos;m really happy with the result because I can easily extend the settings for new landing pages or other e-mail aliases. What do you think about my solution? Let me know in the comments below or mention me on &lt;a href=&quot;https://twitter.com/seeebiii&quot;&gt;Twitter&lt;/a&gt;!&lt;/p&gt;
</content:encoded></item><item><title>5 Ways To Bundle a Lambda Function Within an AWS CDK Construct</title><link>https://www.sebastianhesse.de/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/</guid><description>5 ways to bundle Lambda functions in CDK constructs: inline code, separate files, pre-build bundling, NodejsFunction, and Serverless App Repository integration.</description><pubDate>Sat, 16 Jan 2021 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Have you ever tried to publish a CDK construct that was using a Lambda function, for example to create a custom resource or provide a REST API endpoint? It&apos;s relatively easy to publish your construct if your Lambda function is just using the AWS SDK. But it gets more complicated as soon as other dependencies are involved as well. This post will present you five different ways to bundle your Lambda function within a CDK construct and tells you about the advantages and disadvantages of each option.&lt;/p&gt;
&lt;h2&gt;Problem Context&lt;/h2&gt;
&lt;p&gt;Recently I started digging more into the &lt;a href=&quot;https://aws.amazon.com/cdk/&quot;&gt;AWS CDK&lt;/a&gt; world and wanted to build a simple single page application. I had a special requirement for which I couldn&apos;t find an existing CDK construct available in TypeScript. So I thought I&apos;d create it myself and publish it to NPM later. I wanted to include a Lambda function in this CDK construct that was using an external dependency apart from the AWS SDK. &lt;strong&gt;The problem is&lt;/strong&gt; that if you require other dependencies in an AWS Lambda function, you need to bundle them with your function (the AWS SDK is always available for Node.js runtimes). This means you have to create an artifact that includes all required dependencies. However, I wanted to avoid that my resulting CDK construct package gets too big. I had some ideas in mind but also asked &lt;a href=&quot;https://twitter.com/seeebiii/status/1348716172649361408?s=20&quot;&gt;CDK experts on Twitter&lt;/a&gt; for their opinions and experiences. Below are the results of my ideas and their suggestions!&lt;/p&gt;
&lt;p&gt;Do you have another idea of how to &lt;strong&gt;include a Lambda function in a CDK construct&lt;/strong&gt;? Please comment below ⬇️ or let me know &lt;a href=&quot;https://twitter.com/seeebiii&quot;&gt;via Twitter @seeebiii&lt;/a&gt;. If you&apos;re curious about developing AWS Lambda functions in general, I can recommend my article about &lt;a href=&quot;/2020/03/31/going-serverless-why-and-how-2/&quot;&gt;best practices for developing AWS Lambda functions&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Inline Code in CDK Construct&lt;/h2&gt;
&lt;p&gt;The easiest solution is to write some inline code within the CDK code. It usually looks like this when using the &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-lambda.Function.html&quot;&gt;&lt;code&gt;Function&lt;/code&gt;&lt;/a&gt; construct from the &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html&quot;&gt;&lt;code&gt;@aws-cdk/aws-lambda&lt;/code&gt;&lt;/a&gt; package:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;new Function(this, &apos;MyFunction&apos;, {
  handler: &apos;index.handler&apos;,
  code: Code.fromInline(`
    exports.handler = async (event) =&amp;gt; {
      console.log(&apos;event: &apos;, event)
    };
  `),
  runtime: Runtime.NODEJS_12_X
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code will create a Lambda function with a very basic implementation. You can of course extend it further. However, you are limited in terms of which dependencies you can use: only the AWS SDK (which the runtime provides) and built-in Node.js modules like &lt;code&gt;path&lt;/code&gt; or &lt;code&gt;fs&lt;/code&gt; are available. Also, this approach only works for runtimes that interpret text files, like Python or Node.js. It does not work for compiled languages like Java.&lt;/p&gt;
&lt;h3&gt;Advantages:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Quick and easy way to write a Lambda function&lt;/li&gt;
&lt;li&gt;No extra bundle steps necessary&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Disadvantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You are very limited in what your function can do&lt;/li&gt;
&lt;li&gt;No IDE support while writing your code&lt;/li&gt;
&lt;li&gt;No testing possible, only manual tests&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Separate File(s) in CDK Construct&lt;/h2&gt;
&lt;p&gt;Instead of providing inline code, you can also move your Lambda function code outside of the CDK code into a separate file. Then, you just link to your file from the CDK construct you&apos;re using. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;new Function(this, &apos;MyFunction&apos;, {
  runtime: Runtime.NODEJS_12_X,
  handler: &apos;index.handler&apos;,
  code: Code.fromAsset(`${path.resolve(__dirname)}/lambda`)
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;code&lt;/code&gt; property is referencing an external asset which points to the file of your Lambda function. It assumes the following folder structure:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;root/
 - my-stack.js
 - lambda/
   - index.js
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When deploying a stack with this function code, the CDK will simply take the &lt;code&gt;index&lt;/code&gt; file of your Lambda function and use it as your Lambda function&apos;s code. You can do something similar with other runtimes like Java or Python. Just reference the appropriate artifact like a JAR or Python file. Although this approach is much preferable to writing inline code, it still has the drawback that you cannot simply include other dependencies apart from the AWS SDK, at least for Node.js. You could of course zip your &lt;code&gt;index.js&lt;/code&gt; file together with your &lt;code&gt;node_modules&lt;/code&gt; folder and use that as your artifact. However, this approach is &lt;strong&gt;not recommended&lt;/strong&gt; because it unnecessarily slows down your Lambda function due to a bigger artifact size. You&apos;re just carrying around code which you&apos;re not using.&lt;/p&gt;
&lt;h3&gt;Advantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Keep CDK stack code and Lambda function code separated&lt;/li&gt;
&lt;li&gt;You can test your Lambda function&apos;s code using automated tests&lt;/li&gt;
&lt;li&gt;IDE support while writing your Lambda function&apos;s code&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Disadvantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;External dependencies (apart from AWS SDK) not supported&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Bundle Lambda Function Before Publishing&lt;/h2&gt;
&lt;p&gt;If you want to use other external dependencies, you need to make sure that those dependencies are available when your Lambda function is executed. Therefore, the next logical step is to bundle your Lambda function&apos;s code and generate a code artifact with all the dependencies included. This artifact is referenced by your CDK construct and ultimately shipped to the users of your construct. In your CDK construct you still use the same &lt;code&gt;Function&lt;/code&gt; definition as above where you include the code asset. However, you have to make sure to bundle your code before you publish your CDK construct to any registry like NPM. For example, if you&apos;re writing a TypeScript Lambda function, you can use &lt;a href=&quot;https://esbuild.github.io/&quot;&gt;esbuild&lt;/a&gt; (or webpack or similar) to compile and bundle it to &quot;native&quot; Node.js code that your Lambda function understands:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;esbuild lambda/index.ts --bundle --platform=node --target=node12 --external:aws-sdk --outfile=lambda/build/index.js
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command creates an &lt;code&gt;index.js&lt;/code&gt; file with all dependencies included, except the AWS SDK since this is already provided by the Lambda runtime. If you want to speed up your Lambda function even more, you can append &lt;code&gt;--minify&lt;/code&gt; to use minification and reduce the output size. Here the output file is created under &lt;code&gt;lambda/build&lt;/code&gt;, so take care to adjust the &lt;code&gt;Function&lt;/code&gt; definition. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;new Function(this, &apos;MyFunction&apos;, {
  runtime: Runtime.NODEJS_12_X,
  handler: &apos;index.handler&apos;,
  code: Code.fromAsset(`${path.resolve(__dirname)}/lambda/build`)
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to include the Lambda function&apos;s code in your published CDK construct, consider the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Make sure the &lt;code&gt;lambda/build&lt;/code&gt; folder is not ignored by NPM (this is usually configured in &lt;code&gt;.npmignore&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Before publishing the construct, you have to bundle the Lambda function first - otherwise your published construct is missing the code for your Lambda function&lt;/li&gt;
&lt;li&gt;If you are using &lt;code&gt;projen&lt;/code&gt; to configure your construct project, you can use the following code to execute any command before building your construct:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;const construct = new AwsCdkConstructLibrary(...)

// append command execution
construct.buildTask.exec(...)

// prepend command execution
construct.buildTask.prependExec(...)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;💡If you don&apos;t know what &lt;code&gt;projen&lt;/code&gt; is, take a look at my &lt;a href=&quot;https://github.com/seeebiii/projen-test&quot;&gt;step-by-step getting started tutorial about projen and jsii&lt;/a&gt;. I recommend checking it out! Maybe you even want to &lt;a href=&quot;/2021/03/01/migrating-a-cdk-construct-to-projen-and-jsii/&quot;&gt;migrate your existing CDK construct&lt;/a&gt; to it?&lt;/p&gt;
&lt;h3&gt;Advantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Everything from the previous section about having separate files&lt;/li&gt;
&lt;li&gt;You don&apos;t make any assumptions about the environment of the users that use your construct (will be important in the following sections)&lt;/li&gt;
&lt;li&gt;You throw out all unnecessary code by only bundling the relevant code and maybe even minifying it&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Disadvantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;It takes another build step and slightly increases the size of your CDK construct&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Bundle Lambda Function Before Deploying&lt;/h2&gt;
&lt;p&gt;Instead of bundling the code before publishing your CDK construct, you can also bundle your Lambda function code before the construct is deployed to AWS. The AWS CDK provides a construct for Node.js Lambda functions called &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-lambda-nodejs.NodejsFunction.html&quot;&gt;&lt;code&gt;NodejsFunction&lt;/code&gt;&lt;/a&gt; from the &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-nodejs-readme.html&quot;&gt;&lt;code&gt;@aws-cdk/aws-lambda-nodejs&lt;/code&gt; package&lt;/a&gt;. This construct builds the Lambda function as soon as your CDK construct is deployed within a stack. The &lt;code&gt;NodejsFunction&lt;/code&gt; construct uses esbuild for this, or a Docker container if esbuild is not available (&lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-lambda-nodejs.NodejsFunction.html&quot;&gt;read more about it in the documentation&lt;/a&gt;). Using it in your construct is similar to how you define a regular &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-lambda.Function.html&quot;&gt;&lt;code&gt;Function&lt;/code&gt;&lt;/a&gt; - however, it already defines some useful defaults. An example definition can look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;new NodejsFunction(this, &apos;MyFunction&apos;, {
  entry: `${path.resolve(__dirname)}/lambda/index.js`
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, it&apos;s pretty simple and short. Unfortunately, the big disadvantage is that you make assumptions about the environment where your construct is deployed. If users don&apos;t have esbuild or Docker available, it won&apos;t work. Therefore, it only makes sense to use &lt;code&gt;NodejsFunction&lt;/code&gt; in constructs where you control the environment or if you let your users know about this requirement.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Take care which file you&apos;re referencing for your Lambda function. If you&apos;re using TypeScript, then you need to reference &lt;code&gt;index.ts&lt;/code&gt; for local execution of e.g. tests. However, if someone is using your construct, they usually won&apos;t have the TypeScript files available but only the compiled JavaScript files. The following code snippet can help you:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import * as fs from &apos;fs&apos;;
import * as path from &apos;path&apos;;

// resolve relative to this file, not the current working directory
const lambdaFile = path.join(__dirname, &apos;lambda&apos;, &apos;index&apos;);
const extension = fs.existsSync(lambdaFile + &apos;.ts&apos;) ? &apos;.ts&apos; : &apos;.js&apos;;
const entry = lambdaFile + extension;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Advantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Everything from the previous section about having separate files&lt;/li&gt;
&lt;li&gt;You throw out all unnecessary code by only bundling the relevant code and maybe even minifying it&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Disadvantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You make assumptions about the environment of the users of your construct&lt;/li&gt;
&lt;li&gt;It takes another build step and slightly increases the size of your CDK construct&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Publish Lambda Function to Serverless Application Repository or Using Docker&lt;/h2&gt;
&lt;p&gt;A completely different option compared to the ones above is to use the Serverless Application Repository. It&apos;s a repository for serverless applications that you can build and &lt;a href=&quot;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-publishing-applications.html&quot;&gt;publish to AWS using the SAM CLI&lt;/a&gt;. Then you can use this application in other stacks using the &lt;a href=&quot;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-application.html&quot;&gt;CloudFormation SAM type &lt;code&gt;AWS::Serverless::Application&lt;/code&gt;&lt;/a&gt;. The CDK equivalent is &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-sam.CfnApplication.html&quot;&gt;CfnApplication&lt;/a&gt; from the &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/aws-sam-readme.html&quot;&gt;@aws-cdk/aws-sam&lt;/a&gt; package. Since those applications can be made public to everyone, you have a neat way to host your Lambda function outside of your CDK construct, i.e. without bundling it inside your CDK construct. You could even &lt;a href=&quot;https://docs.aws.amazon.com/serverlessrepo/latest/devguide/sharing-lambda-layers.html&quot;&gt;share your AWS Lambda Layer in the same way&lt;/a&gt; and reference that instead of a full serverless application (see &lt;a href=&quot;https://levelup.gitconnected.com/blog-md-9bd47be8b3ad&quot;&gt;how to use Layers in AWS CDK here&lt;/a&gt;). This has the advantage that you can still use the &lt;code&gt;Function&lt;/code&gt; construct as explained above and just add a &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-lambda.LayerVersion.html&quot;&gt;&lt;code&gt;LayerVersion&lt;/code&gt; construct&lt;/a&gt;.
Similarly, you can &lt;a href=&quot;https://aws.amazon.com/blogs/compute/using-container-image-support-for-aws-lambda-with-aws-sam/&quot;&gt;publish your Lambda function as a container nowadays&lt;/a&gt; and reference that in your CDK construct. The CDK provides a &lt;a href=&quot;https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#docker-images&quot;&gt;&lt;code&gt;DockerImageFunction&lt;/code&gt;&lt;/a&gt; for this case.&lt;/p&gt;
&lt;p&gt;Although these options sound appealing because you gain much more flexibility in how your Lambda function is built, they come with two disadvantages: First, you are referencing an unknown external stack or dependency that you should make your users aware of so they can verify it. Second, it adds much more complexity than is often necessary. Especially if you&apos;re using a Node.js runtime, none of these steps should be necessary for bundling a Lambda function within your CDK construct. It&apos;s much easier to use one of the other options above.&lt;/p&gt;
&lt;h3&gt;Advantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Separation of concerns&lt;/li&gt;
&lt;li&gt;Flexibility of which dependencies you need and how you provide them&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Disadvantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;More complexity&lt;/li&gt;
&lt;li&gt;Potential concerns by users&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In most cases, having separate files is completely sufficient. Especially if you just use the AWS SDK in your Lambda function, there&apos;s often no need to over-optimize your code. However, as soon as you use other external dependencies, you have to do more to bundle a Lambda function within a CDK construct. My recommended approach is to bundle your Lambda function &lt;strong&gt;before&lt;/strong&gt; you publish your CDK construct and then simply use the bundled artifact in your CDK construct. As mentioned, the advantage is that you don&apos;t make any assumptions about the (build) environments that the users of your CDK constructs have.&lt;/p&gt;
</content:encoded></item><item><title>Automatically Generate a Nice Looking Serverless REST API Documentation</title><link>https://www.sebastianhesse.de/2020/08/27/automatically-generate-nice-looking-serverless-rest-api-documentation/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2020/08/27/automatically-generate-nice-looking-serverless-rest-api-documentation/</guid><description>Generate beautiful REST API documentation from OpenAPI specs using ReDoc, openapi-generator, or swagger-codegen. Automate docs for serverless AWS Lambda APIs.</description><pubDate>Thu, 27 Aug 2020 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In the past years, technology has made huge progress and automating your processes is more important than ever. Documenting your REST APIs is no exception here. It&apos;s even more important in the fast-moving serverless world to automate your serverless REST API documentation to always keep it up-to-date. The &lt;a href=&quot;https://www.openapis.org/&quot;&gt;OpenAPI (Swagger)&lt;/a&gt; standard helps you describe your REST APIs in a consistent and machine-readable format. This blog post describes the basic steps and explains how you can generate a nice looking serverless REST API documentation!&lt;/p&gt;
&lt;p&gt;The goal of this blog post is to build an automated process for creating and describing your serverless REST API. We&apos;ll use an OpenAPI document for describing your endpoints and then generate a nice looking documentation out of it. All examples are based on my most recent side project &lt;a href=&quot;https://saas-marketplaces.com&quot;&gt;saas-marketplaces.com&lt;/a&gt; where I&apos;ve used the same approach. You can check out the results under &lt;a href=&quot;https://dev.saas-marketplaces.com&quot;&gt;dev.saas-marketplaces.com&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This blog post is using the &lt;a href=&quot;https://aws.amazon.com/serverless/sam/&quot;&gt;Serverless Application Model (SAM)&lt;/a&gt; by AWS to describe the serverless functions on AWS Lambda. There are similar approaches available for other frameworks, e.g. using the &lt;a href=&quot;https://www.serverless.com/&quot;&gt;Serverless framework&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Creating A Serverless REST API&lt;/h2&gt;
&lt;p&gt;To start off, we&apos;ll define a new serverless function using SAM. The API definition uses the OpenAPI 3.0 specification, and the function returns a list of SaaS Marketplaces:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AWSTemplateFormatVersion: &apos;2010-09-09&apos;
Transform: AWS::Serverless-2016-10-31
Description: &amp;gt;
  A simple serverless function using OpenAPI spec.

Resources:
  # An API definition for multiple serverless functions
  DefaultApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      DefinitionBody:
        &apos;Fn::Transform&apos;:
          Name: &apos;AWS::Include&apos;
          Parameters:
            Location: !Sub s3://${TemplateBucket}/spec/backend-api-spec.yaml

  GetMarketplacesFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: dist/
      Handler: index.handler
      Runtime: nodejs12.x
      Events:
        GetMarketplaces:
          Type: Api
          Properties:
            RestApiId: !Ref DefaultApi
            Path: /api/1/marketplaces
            Method: GET
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are two important definitions here. One is the &lt;code&gt;DefaultApi&lt;/code&gt; resource which references a &lt;code&gt;backend-api-spec.yaml&lt;/code&gt; in its &lt;code&gt;DefinitionBody&lt;/code&gt;. The &lt;code&gt;backend-api-spec.yaml&lt;/code&gt; file is used to describe the REST API and also to generate the documentation. The other important definition is the line where we reference &lt;code&gt;DefaultApi&lt;/code&gt; in the serverless function using &lt;code&gt;RestApiId: !Ref DefaultApi&lt;/code&gt;. These are the main &lt;em&gt;ingredients&lt;/em&gt; for a serverless REST API and its documentation using OpenAPI (Swagger). If you have questions about the other elements, I advise you to read through the &lt;a href=&quot;https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md&quot;&gt;comprehensive SAM specification&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Describe Your Serverless REST API Using OpenAPI&lt;/h2&gt;
&lt;p&gt;The next step is to write our OpenAPI document &lt;code&gt;backend-api-spec.yaml&lt;/code&gt;. Such a document consists of some meta information (under &lt;code&gt;info&lt;/code&gt;), the different endpoints identified by their path (under &lt;code&gt;paths&lt;/code&gt;) and other definitions like a response model (under &lt;code&gt;components&lt;/code&gt;). We&apos;ll focus on the &lt;code&gt;paths&lt;/code&gt; definitions for now.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;openapi: &quot;3.0.1&quot;
info:
  version: &quot;1&quot;
  title: &quot;Serverless REST API Documentation&quot;

paths:
  /api/1/marketplaces:
    get:
      summary: &quot;Get all marketplaces&quot;
      operationId: getMarketplaces
      responses:
        200:
          description: Successful response
          content:
            application/json:
              schema:
                $ref: &quot;#/components/schemas/Marketplaces&quot;
      x-amazon-apigateway-integration:
        responses:
          default:
            statusCode: 200
        uri:
          Fn::Sub: &quot;arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${GetMarketplacesFunction.Arn}/invocations&quot;
        passthroughBehavior: when_no_match
        httpMethod: POST
        type: aws_proxy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Multiple lines are important here. First, you define some meta information in the beginning of the file. Then, you define all endpoints that your REST API supports. They are denoted by a path (e.g. &lt;code&gt;/api/1/marketplaces/&lt;/code&gt;) and a method (e.g. &lt;code&gt;get&lt;/code&gt;). Each path + method combination is considered an endpoint that you can describe further. For example, you can provide meta information like a &lt;code&gt;summary&lt;/code&gt; but also the different &lt;code&gt;response&lt;/code&gt;s a client can expect. In this case we&apos;re &lt;em&gt;referencing&lt;/em&gt; a &lt;code&gt;schema&lt;/code&gt; describing the JSON response data. I&apos;m skipping the response model here as it&apos;s not the most interesting part.&lt;/p&gt;
&lt;p&gt;The OpenAPI spec also allows adding extensions denoted by &lt;code&gt;x-&lt;/code&gt;. These are custom extensions for any kind of provider of your REST API schema. Since we want to create a REST API with AWS API Gateway, you need to check what kind of &lt;a href=&quot;https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html&quot;&gt;extensions API Gateway supports&lt;/a&gt;. For example, &lt;code&gt;x-amazon-apigateway-integration&lt;/code&gt; defines what should happen when a &lt;code&gt;GET /api/1/marketplaces&lt;/code&gt; request comes in to API Gateway. In this case it&apos;ll forward the request to the Lambda function referenced by the ARN. The &lt;code&gt;type&lt;/code&gt; property defines the type of integration, in this case &lt;code&gt;aws_proxy&lt;/code&gt;, which forwards the complete HTTP event to the Lambda function.&lt;/p&gt;
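&lt;p&gt;To illustrate the &lt;code&gt;aws_proxy&lt;/code&gt; flow, here is a minimal sketch (an illustration, not part of the original stack) of a Node.js handler behind such an integration: API Gateway passes the whole HTTP request as the event object, and the function must return an object with &lt;code&gt;statusCode&lt;/code&gt;, &lt;code&gt;headers&lt;/code&gt;, and &lt;code&gt;body&lt;/code&gt;.&lt;/p&gt;

```javascript
// Minimal sketch of a Lambda handler behind an aws_proxy integration.
// The empty marketplaces list is a placeholder for the real data source.
const handler = async (event) => {
  // event.path, event.httpMethod, event.headers, and event.body
  // are filled in by API Gateway for each incoming request
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ marketplaces: [] }),
  };
};
```

&lt;p&gt;In a real module you would export this function (e.g. as &lt;code&gt;exports.handler&lt;/code&gt;, matching the &lt;code&gt;index.handler&lt;/code&gt; setting in the SAM template above).&lt;/p&gt;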
&lt;h2&gt;Deploy Your Stack&lt;/h2&gt;
&lt;p&gt;The first part is ready. Assuming you have written some code for the Lambda function, you can now deploy it to AWS. The following steps are required:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Upload the &lt;code&gt;backend-api-spec.yaml&lt;/code&gt; file to an S3 bucket.&lt;/li&gt;
&lt;li&gt;Package the SAM template using &lt;code&gt;aws cloudformation package&lt;/code&gt; or &lt;code&gt;sam package&lt;/code&gt; if you have SAM CLI installed.&lt;/li&gt;
&lt;li&gt;Deploy a stack using the SAM template and &lt;code&gt;aws cloudformation deploy&lt;/code&gt; or &lt;code&gt;sam deploy&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Afterwards, you can access your REST API under the URL &lt;code&gt;&amp;lt;YourApiGatewayUrl&amp;gt;/api/1/marketplaces&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Generate a Nice Looking Serverless REST API Documentation&lt;/h2&gt;
&lt;p&gt;Until now, you haven&apos;t generated a nice looking serverless REST API documentation. And as you might have recognized, you shouldn&apos;t use your current OpenAPI document for it, because it still includes internal information like Lambda function ARNs. Instead, we need to clean the document&apos;s content first.&lt;/p&gt;
&lt;h3&gt;Prepare The OpenAPI Document&lt;/h3&gt;
&lt;p&gt;I have written a custom Node.js script for my own purposes. It reads the &lt;code&gt;backend-api-spec.yaml&lt;/code&gt;, removes the internal data and generates a new file &lt;code&gt;backend-api.yaml&lt;/code&gt;. The generated file will serve as our public file which can be published on the internet or used for generating documentation (see next section). You can start with a simplified version of my script and extend it to your own needs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const fs = require(&apos;fs&apos;);
const jsYaml = require(&apos;js-yaml&apos;);

// Define the path to the backend-api-spec.yaml and the target file backend-api.yaml
const inputSpec = &apos;backend-api-spec.yaml&apos;;
const targetSpec = &apos;backend-api.yaml&apos;;

// Load OpenAPI document
const spec = jsYaml.safeLoad(fs.readFileSync(inputSpec, &apos;utf8&apos;));

// remove all Api Gateway extensions to not reveal the internal AWS details
function removeApiGatewayExtensions(spec) {
    Object.keys(spec).forEach(property =&amp;gt; {
        if (property.indexOf(&apos;x-amazon-apigateway&apos;) &amp;gt; -1) {
            delete spec[property];
        } else if (spec[property] !== null &amp;&amp; typeof spec[property] === &quot;object&quot;) {
            removeApiGatewayExtensions(spec[property]);
        }
    });
}

removeApiGatewayExtensions(spec);

// Potential extension: Adjust more things in the OpenAPI document, e.g. base url

// After correcting all data, write it back to target file
fs.writeFileSync(targetSpec, jsYaml.safeDump(spec));
&lt;/code&gt;&lt;/pre&gt;
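&lt;p&gt;As an example of the &quot;potential extension&quot; mentioned in the script&apos;s comment, the following hypothetical helper (the function name and URL are illustrative, not part of the original script) overwrites the &lt;code&gt;servers&lt;/code&gt; section so the published spec points at your public base URL instead of internal endpoints:&lt;/p&gt;

```javascript
// Hypothetical extension: replace any internal server entries in the
// OpenAPI document with a single public base URL before publishing.
function setPublicServerUrl(spec, publicUrl) {
  spec.servers = [{ url: publicUrl, description: 'Public API endpoint' }];
  return spec;
}
```

&lt;p&gt;You would call it right before writing the target file, e.g. &lt;code&gt;setPublicServerUrl(spec, &apos;https://api.example.com&apos;)&lt;/code&gt;.&lt;/p&gt;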
&lt;h3&gt;Generate Documentation With Different Tools&lt;/h3&gt;
&lt;p&gt;Now you can use the output file and generate a nice looking serverless REST API documentation. There are various tools for this use case, like &lt;a href=&quot;https://github.com/OpenAPITools/openapi-generator&quot;&gt;openapi-generator&lt;/a&gt;, &lt;a href=&quot;https://github.com/swagger-api/swagger-codegen&quot;&gt;swagger-codegen&lt;/a&gt;, or &lt;a href=&quot;https://github.com/Redocly/redoc&quot;&gt;redoc&lt;/a&gt;. (Note: &lt;code&gt;openapi-generator&lt;/code&gt; and &lt;code&gt;swagger-codegen&lt;/code&gt; are pretty similar and I can recommend reading &lt;a href=&quot;https://openapi-generator.tech/docs/faq/#what-is-the-difference-between-swagger-codegen-and-openapi-generator&quot;&gt;the FAQ about their difference&lt;/a&gt;) &lt;code&gt;redoc&lt;/code&gt; is a special case here because it only focuses on &lt;strong&gt;generating documentation&lt;/strong&gt; whereas the other two can also &lt;strong&gt;generate server and client code&lt;/strong&gt;. Generating client code is a really cool feature if you&apos;re providing a (public) API and don&apos;t want to (or can&apos;t?) write the code by yourself. Here are the steps to generate the documentation using each of the three tools.&lt;/p&gt;
&lt;h4&gt;openapi-generator&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;# Install it on your machine as a CLI command, e.g. on MacOS using Homebrew:
brew install openapi-generator

# Generates the documentation at openapi-generator-docs/index.html
openapi-generator generate -i backend-api.yaml -g html2 -o openapi-generator-docs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/openapi-documentation-example.png&quot; alt=&quot;A nice looking serverless REST API documentation using openapi-generator&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Resulting HTML document using openapi-generator&lt;/p&gt;
&lt;h4&gt;swagger-codegen&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;# Install it on your machine as a CLI command, e.g. on MacOS using Homebrew:
brew install swagger-codegen

# Generates the documentation at swagger-codegen-docs/index.html
swagger-codegen generate -i backend-api.yaml -l html2 -o swagger-codegen-docs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that the parameter for specifying the generator template is different between &lt;code&gt;openapi-generator&lt;/code&gt; and &lt;code&gt;swagger-codegen&lt;/code&gt;. &lt;code&gt;openapi-generator&lt;/code&gt; requires &lt;code&gt;-g&lt;/code&gt; (for &lt;strong&gt;g&lt;/strong&gt;enerator) and &lt;code&gt;swagger-codegen&lt;/code&gt; requires &lt;code&gt;-l&lt;/code&gt; (for &lt;strong&gt;l&lt;/strong&gt;anguage).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/swagger-documentation-example.png&quot; alt=&quot;A nice looking serverless REST API documentation using swagger-codegen&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Resulting HTML document using swagger-codegen&lt;/p&gt;
&lt;h4&gt;redoc&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;# Install it on your machine as a CLI command, e.g. on MacOS using npm
npm i -g redoc-cli

# Generates the documentation at redoc-docs/index.html
redoc-cli bundle backend-api.yaml -o redoc-docs/index.html
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/redoc-api-documentation-example.png&quot; alt=&quot;A nice looking serverless REST API documentation using redoc&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Resulting HTML document using redoc&lt;/p&gt;
&lt;h3&gt;Next steps&lt;/h3&gt;
&lt;p&gt;The final step is to use the generated HTML file and integrate it into your documentation section. If you already have some developer documentation for your app or software, you can simply take the resulting HTML file and copy it to the appropriate location. In my case, I&apos;m uploading it to S3 with CloudFront in front of it to cache the content. Then it&apos;s available under &lt;a href=&quot;http://dev.saas-marketplaces.com&quot;&gt;dev.saas-marketplaces.com&lt;/a&gt;. As a side note: I&apos;m managing my stack using CloudFormation and also &lt;a href=&quot;/2018/02/03/creating-different-aws-cloudformation-environments/&quot;&gt;generate different environments for production and development&lt;/a&gt;. This keeps everything separated and lets me try out changes first.&lt;/p&gt;
&lt;p&gt;As you can see, the generated HTML files of &lt;code&gt;openapi-generator&lt;/code&gt; and &lt;code&gt;swagger-codegen&lt;/code&gt; look really similar. As the FAQ linked above explains, the tools share a common history, which is why their usage and results are so alike. In the end I chose &lt;code&gt;redoc&lt;/code&gt; because I preferred its styling and didn&apos;t need custom branding at the moment. However, the other two tools are still a very good choice if you want to generate your client libraries as well.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Generating a &lt;strong&gt;nice looking documentation for OpenAPI specifications&lt;/strong&gt; is easy nowadays. Tools like &lt;code&gt;openapi-generator&lt;/code&gt;, &lt;code&gt;swagger-codegen&lt;/code&gt; or &lt;code&gt;redoc&lt;/code&gt; are doing a great job! This is a &lt;strong&gt;great support for your serverless functions&lt;/strong&gt; and their documentation. Unfortunately, setting up such a process takes time. Also, if you&apos;re not comfortable with the tools yet, you have to learn quite a bit before every step can be automated. I really like the end result, and my next step is to generate the client libraries.&lt;/p&gt;
&lt;p&gt;Have you ever worked with any of the tools? Let me know your experiences or best practices, especially in the context of serverless functions.&lt;/p&gt;
</content:encoded></item><item><title>Using DynamoDB Local and Testcontainers in Java within Bitbucket Pipelines</title><link>https://www.sebastianhesse.de/2020/05/14/using-dynamodb-local-and-testcontainers-in-java-within-bitbucket-pipelines/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2020/05/14/using-dynamodb-local-and-testcontainers-in-java-within-bitbucket-pipelines/</guid><description>Automate DynamoDB testing with Testcontainers and DynamoDB Local in Bitbucket Pipelines. Complete setup guide including Ryuk configuration and AWS SDK settings.</description><pubDate>Thu, 14 May 2020 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Running cloud services on your local machine is often a problem because there is no local version available. Thankfully DynamoDB provides a local version of their database. This makes unit testing cloud services a lot easier if you&apos;re relying on DynamoDB. Unfortunately, setting up DynamoDB Local and combining it with Testcontainers and Bitbucket Pipelines in your automated tests can lead to some headache. This blog post explains all required steps with Java and helps with typical pitfalls.&lt;/p&gt;
&lt;p&gt;All the code is available in a Bitbucket repository providing various examples of how to combine the approaches below. Just go to &lt;a href=&quot;https://bitbucket.org/sebastianhesse/java-dynamodb-local-automated-testing&quot;&gt;https://bitbucket.org/sebastianhesse/java-dynamodb-local-automated-testing&lt;/a&gt; and have a look 😊 This blog post is separated into the following parts:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;#setup-dynamodb&quot;&gt;Setting up DynamoDB Local&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#automatically-start-dynamodb&quot;&gt;Automatically Start DynamoDB Local Before Running the Tests&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#integrating-spring&quot;&gt;Optional: Integrating Spring Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#executing-on-bitbucket-pipelines&quot;&gt;Executing the Tests on Bitbucket Pipelines&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Setting up DynamoDB Local&lt;/h2&gt;
&lt;p&gt;First, you need to set up &lt;a href=&quot;https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html&quot;&gt;DynamoDB Local&lt;/a&gt;. It&apos;s a local version of the popular AWS database that you can install on your local machine. You can choose between a &quot;real&quot; local installation, a Maven dependency, or a Docker image. All options start a local instance of DynamoDB that you can connect to using the AWS SDK. The following sections use the &lt;a href=&quot;https://hub.docker.com/r/amazon/dynamodb-local/&quot;&gt;DynamoDB Local Docker image&lt;/a&gt;. Running it locally only requires this command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;docker run -p 8000:8000 amazon/dynamodb-local
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It spins up a Docker container and makes DynamoDB available on port &lt;code&gt;8000&lt;/code&gt;. Then, you only need to adapt the DynamoDB client from the AWS SDK and point it to the localhost endpoint:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AwsClientBuilder.EndpointConfiguration endpointConfig =
    new AwsClientBuilder.EndpointConfiguration(&quot;http://localhost:8000&quot;,
    &quot;us-west-2&quot;);
AmazonDynamoDB dynamodb = AmazonDynamoDBClientBuilder.standard()
    .withEndpointConfiguration(endpointConfig)
    .build();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you want, you can further customize DynamoDB Local as described in the &lt;a href=&quot;https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.UsageNotes.html&quot;&gt;usage notes&lt;/a&gt;. For example, you can change the port.&lt;/p&gt;
&lt;h2&gt;Automatically Start DynamoDB Local Before Running The Tests&lt;/h2&gt;
&lt;p&gt;Since you don&apos;t want to start a Docker container manually each time you run a unit test, you need to automate this step. For this, &lt;a href=&quot;https://www.testcontainers.org/&quot;&gt;Testcontainers&lt;/a&gt; is a really good library that can be integrated into your automated tests. It allows you to start Docker containers, e.g. before running a JUnit test. To start a DynamoDB Local instance, simply add the following code to your tests:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// This declaration will start a DynamoDB Local Docker container before the unit tests run -&amp;gt;
// make sure to specify the exposed port as 8000, otherwise the port
// mapping will be wrong -&amp;gt; see the docs: https://hub.docker.com/r/amazon/dynamodb-local
@ClassRule
public static GenericContainer dynamoDBLocal =
    new GenericContainer(&quot;amazon/dynamodb-local:1.11.477&quot;)
        .withExposedPorts(8000);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code is using JUnit 4&apos;s &lt;code&gt;@ClassRule&lt;/code&gt; annotation to trigger the Testcontainers start process. Testcontainers will automatically start the given Docker container by choosing a free port on your system and mapping it to &lt;code&gt;8000&lt;/code&gt; (DynamoDB Local&apos;s default port). Since the free port can be different each time you trigger this code, you need to retrieve the port. The following code allows you to build the local endpoint url for accessing the local DynamoDB instance:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;private String buildEndpointUrl() {
    return &quot;http://localhost:&quot; + dynamoDBLocal.getFirstMappedPort();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then use the endpoint url to adapt the AWS SDK configuration as described above and run your tests. You can find a fully working example of this test in the &lt;a href=&quot;https://bitbucket.org/sebastianhesse/java-dynamodb-local-automated-testing/src/master/src/test/java/de/sebastianhesse/examples/regular/DynamoDBLocalTest.java&quot;&gt;DynamoDBLocalTest.java&lt;/a&gt; file in my Bitbucket repository.&lt;/p&gt;
&lt;h2&gt;Optional: Integrating Spring Framework&lt;/h2&gt;
&lt;p&gt;In the Java world, a lot of people are using the Spring Framework, not only for managing dependencies but also for many other features that reduce boilerplate code. If you use Spring, testing your Spring-related code is necessary as well. However, when combining Spring with DynamoDB Local and Testcontainers, you have to clean the database each time a new test runs. To save you some time, here&apos;s how to do it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@RunWith(SpringJUnit4ClassRunner.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_CLASS)
public class YourTestClass {
    
    // specify DynamoDB Local using Testcontainers
    @ClassRule public static GenericContainer dynamoDBLocal =
        new GenericContainer(&quot;amazon/dynamodb-local:1.11.477&quot;)
            .withExposedPorts(8000);

    // Your test code follows here ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Spring Test module offers a &lt;code&gt;@DirtiesContext&lt;/code&gt; annotation that marks a Spring context as &lt;em&gt;dirty&lt;/em&gt;. This causes the Spring context to be reloaded and thus a new DynamoDB Local Docker container to be started. Marking the context as dirty can happen after each class or method run, or even on a package level. Just have a look at the &lt;a href=&quot;https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/test/annotation/DirtiesContext.html&quot;&gt;JavaDocs of @DirtiesContext&lt;/a&gt; for further information.&lt;/p&gt;
&lt;p&gt;This is really helpful when dealing with Spring in your tests. If you have other ideas how to solve that, please leave a comment below 💬&lt;/p&gt;
&lt;h2&gt;Executing the Tests on Bitbucket Pipelines&lt;/h2&gt;
&lt;p&gt;Now we&apos;re getting to the most interesting part: put everything together and run it on &lt;a href=&quot;https://bitbucket.org/product/features/pipelines&quot;&gt;Bitbucket Pipelines&lt;/a&gt;. Before we do that, we need to configure Bitbucket Pipelines using a file called &lt;code&gt;bitbucket-pipelines.yml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# You can change the image but make sure it supports Docker, Java and Maven.
image: k15t/bitbucket-pipeline-build-java:2020-01-09

pipelines:
  default:
    - step:
        caches:
          - maven
          - docker
        script:
          # this is the Maven command to be executed
          - mvn -B verify

options:
  docker: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The configuration above defines a default pipeline that runs on each commit. It uses a Maven and a Docker cache, so every execution after the first one takes less time. One important option is to enable Docker by using &lt;code&gt;docker: true&lt;/code&gt;. It allows you to &lt;a href=&quot;https://confluence.atlassian.com/bitbucket/run-docker-commands-in-bitbucket-pipelines-879254331.html&quot;&gt;run Docker commands&lt;/a&gt; within Bitbucket Pipelines, which is necessary for Testcontainers to start a Docker container.&lt;/p&gt;
&lt;h3&gt;Disabling Ryuk&lt;/h3&gt;
&lt;p&gt;If you run this setup the first time, you&apos;ll likely encounter the following error message:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[testcontainers-ryuk] WARN  org.testcontainers.utility.ResourceReaper - Can not connect to Ryuk at localhost:32768
java.net.ConnectException: Connection refused (Connection refused)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ryuk is a service that cleans up the containers after they&apos;ve been used. Since Bitbucket Pipelines throws away the pipeline environment after some time anyway, you can &lt;a href=&quot;https://www.testcontainers.org/features/configuration/&quot;&gt;disable this feature&lt;/a&gt;. To do so, set &lt;code&gt;TESTCONTAINERS_RYUK_DISABLED&lt;/code&gt; to &lt;code&gt;TRUE&lt;/code&gt; in your Bitbucket Pipelines environment settings.&lt;/p&gt;
&lt;h3&gt;Proper Setup of AWS SDK&lt;/h3&gt;
&lt;p&gt;After disabling Ryuk, run the Pipeline again. You&apos;ll see another error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: To use assume role profiles the aws-java-sdk-sts module must be on the class path., com.amazonaws.auth.profile.ProfileCredentialsProvider@3c443976: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@4ee33af7: Failed to connect to service endpoint: ]
	at de.sebastianhesse.examples.regular.DynamoDBLocalTest.createCreateDynamoDBTable(DynamoDBLocalTest.java:118)
	at de.sebastianhesse.examples.regular.DynamoDBLocalTest.connectionSuccessful(DynamoDBLocalTest.java:48)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This error tells you that the AWS SDK is not able to load any AWS credentials. You might wonder why this is necessary because you&apos;re connecting to a &lt;strong&gt;local&lt;/strong&gt; DynamoDB 🤔 Well, even though you&apos;re only opening a local connection that doesn&apos;t need credentials, the AWS SDK still requires them to be configured. Thus, you need to properly &lt;a href=&quot;https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html&quot;&gt;configure the AWS SDK&lt;/a&gt;. However, it&apos;s enough to add some dummy values for the &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_KEY&lt;/code&gt; environment variables and run the Pipeline again. Now, you&apos;ll most likely see another error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a similar problem to the previous one. Even though you can use any region for connecting to a DynamoDB Local instance, you have to tell the AWS SDK a default region. Simply add another environment variable for &lt;code&gt;AWS_DEFAULT_REGION&lt;/code&gt; and set it to &lt;code&gt;us-east-1&lt;/code&gt;. In the end, you should have configured the following environment variables:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/bitbucket-pipelines-environment-variables.png&quot; alt=&quot;Environment Variables for correct setup with Bitbucket Pipelines.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Required environment variables for successfully executing the tests.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;With this setup, you&apos;re prepared for extending your automated tests using Java, JUnit, DynamoDB Local, Testcontainers and Bitbucket Pipelines 🎉 I&apos;m looking forward to your feedback about this topic! How do you run your automated tests? Have you ever used Bitbucket Pipelines for this?&lt;/p&gt;
&lt;p&gt;You can also use this setup when working with serverless functions that load and save data from/to DynamoDB. A great example GitHub repository is &lt;a href=&quot;https://github.com/aws-samples/aws-sam-java-rest&quot;&gt;aws-sam-java-rest&lt;/a&gt;, containing many examples of serverless functions. If you struggle writing serverless functions, read my blog posts about &lt;a href=&quot;/2019/07/21/going-serverless-why-and-how-1/&quot;&gt;Going Serverless&lt;/a&gt;. In this context, it&apos;s also important not to load the data from the database each time. The article &lt;a href=&quot;/2018/12/16/caching-in-aws-lambda/&quot;&gt;Caching in AWS Lambda&lt;/a&gt; presents best practices for properly caching data in serverless functions.&lt;/p&gt;
</content:encoded></item><item><title>Going Serverless - Why and How (2)</title><link>https://www.sebastianhesse.de/2020/03/31/going-serverless-why-and-how-2/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2020/03/31/going-serverless-why-and-how-2/</guid><description>Serverless best practices: keep functions small, use asynchronous communication, scale responsibly, and manage timeouts with Step Functions. Essential guide for production systems.</description><pubDate>Tue, 31 Mar 2020 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;After working for more than three years with AWS Lambda and other serverless services, I&apos;ve come across various best practices to improve your way of going serverless. Let me share with you how you can successfully develop your software using serverless functions from a technical perspective.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;💡&lt;strong&gt;Note:&lt;/strong&gt; If you are new to serverless functions, I suggest reading my previous blog post about &lt;a href=&quot;/2019/07/21/going-serverless-why-and-how-1/&quot;&gt;Why Going Serverless&lt;/a&gt;. Also, please consider that this is not a blog post about the &lt;a href=&quot;https://serverless.com/&quot;&gt;Serverless framework&lt;/a&gt;. It&apos;s rather about serverless functions in general - or Function-as-a-Service (FaaS).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I have collected four best practices below that I think are the most important ones:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#small-functions&quot;&gt;Small Functions&lt;/a&gt;: keep your function&apos;s size as small as possible&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#communication&quot;&gt;Communication&lt;/a&gt;: choose between synchronous and asynchronous communication&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#scalability&quot;&gt;Scalability&lt;/a&gt;: scale responsibly!&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#time-management&quot;&gt;Time Management&lt;/a&gt;: appropriately use your execution time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, there are definitely more best practices available. Let me know in the comments which ones are important to you!&lt;/p&gt;
&lt;h2&gt;Small Functions&lt;/h2&gt;
&lt;p&gt;The first best practice is the &lt;strong&gt;most important one&lt;/strong&gt;. It&apos;s about keeping a small and limited function scope. The key is to focus on one particular use case following the &lt;a href=&quot;https://en.wikipedia.org/wiki/Single-responsibility_principle&quot;&gt;single responsibility principle&lt;/a&gt;. For example, let&apos;s say your software receives webhooks from another service and processes them. In this case, you should create a function which receives the webhook data, performs some validity checks and forwards it to a different (internal) service or serverless function for further processing. You should not include the processing steps in that single function.&lt;/p&gt;
&lt;p&gt;There are multiple reasons why you should keep your functions this small. First, you can later reuse your functions from a different context, e.g. a processing function that can be called from various sources. Second, it makes testing your functions a lot easier if they focus on one task. Third, the performance and scalability of your function improve a lot.&lt;/p&gt;
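&lt;p&gt;To make the webhook example above a bit more concrete, here is a minimal sketch of such a single-purpose receiver. The &lt;code&gt;Forwarder&lt;/code&gt; interface is a hypothetical stand-in for whatever you forward to (e.g. an SQS or SNS client), and the validity checks are illustrative only:&lt;/p&gt;

```java
import java.util.Map;

// Sketch of a single-purpose webhook receiver: validate, then forward.
// Forwarder is a hypothetical stand-in for e.g. an SQS or SNS client.
public class WebhookReceiver {

    interface Forwarder {
        void forward(String body);
    }

    // Returns true if the webhook was accepted and forwarded.
    static boolean handle(Map<String, String> headers, String body, Forwarder forwarder) {
        // Validity checks only - no processing logic lives here.
        if (body == null || body.isEmpty()) {
            return false;
        }
        if (!"application/json".equals(headers.get("Content-Type"))) {
            return false;
        }
        forwarder.forward(body); // processing happens in another function
        return true;
    }

    public static void main(String[] args) {
        boolean accepted = handle(
                Map.of("Content-Type", "application/json"),
                "{\"event\":\"order.created\"}",
                b -> System.out.println("forwarded: " + b));
        System.out.println(accepted);
    }
}
```

&lt;p&gt;Because the function only validates and forwards, you can unit test it without any infrastructure - one of the benefits mentioned above.&lt;/p&gt;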
&lt;h3&gt;Code Artifact Size&lt;/h3&gt;
&lt;p&gt;Having a small code artifact is a huge advantage in terms of cold starts. Keep in mind:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The bigger your code&apos;s artifact size, the slower the startup time.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In this context, two questions often come up: (1) Can I use framework/library X in my code? For example, people would like to continue using a framework like &lt;a href=&quot;https://spring.io/&quot;&gt;Spring (Java)&lt;/a&gt; as such frameworks are often used in more traditional architectures. And (2), can I use serverless functions as a REST API? That seems reasonable due to the good support of serverless functions and HTTP events. In terms of the code&apos;s artifact size and the resulting performance problems especially for Java, the general answer is &lt;strong&gt;No&lt;/strong&gt;, you shouldn&apos;t do any of that if it&apos;s not necessary.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-lambda-cold-start-diagram.jpg&quot; alt=&quot;Diagram illustrating AWS Lambda cold start performance impact&quot; /&gt;&lt;/p&gt;
&lt;p&gt;However, there are situations where it&apos;s still acceptable. It can be reasonable if you do not care too much about performance, for example if a serverless function is only running in the background. Or if you&apos;re using languages like &lt;a href=&quot;https://levelup.gitconnected.com/aws-lambda-cold-start-language-comparisons-2019-edition-%EF%B8%8F-1946d32a0244&quot;&gt;Python or Node.js, which have really good cold start performance&lt;/a&gt;. This is key when using serverless functions as a REST API. I &lt;strong&gt;cannot recommend Java&lt;/strong&gt; for this use case.&lt;/p&gt;
&lt;p&gt;On the other hand, you shouldn&apos;t worry too much about the cold start issue. If your functions get busier and busier (i.e. your software becomes more popular), they are kept warm for a long time. Thus, you won&apos;t hit the cold start that often. I have seen functions staying warm for &lt;strong&gt;several hours&lt;/strong&gt; because they were busy processing data every few seconds.&lt;/p&gt;
&lt;h2&gt;Communication&lt;/h2&gt;
&lt;p&gt;The next important best practice is about the &lt;strong&gt;communication between serverless functions&lt;/strong&gt;. There are two ways they can communicate: synchronously and asynchronously. I definitely recommend using asynchronous communication because in many cases it&apos;s the only choice you have. But first, let&apos;s have a look at what each of them actually means.&lt;/p&gt;
&lt;h3&gt;Synchronous Communication&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/serverless-sync-communication.jpg&quot; alt=&quot;Architecture diagram showing synchronous communication pattern between AWS Lambda functions&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Synchronous communication&lt;/strong&gt; means directly calling another function by using a cloud provider&apos;s SDK. For AWS Lambda, you can &lt;em&gt;invoke&lt;/em&gt; a Lambda function with a payload and wait for it to return. Two problems can occur now: a) the calling function runs out of time while waiting for the other to return; b) the data payload you want to provide is too big and reaching the limits of the service, e.g. &lt;a href=&quot;https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html&quot;&gt;6MB for AWS Lambda&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Both problems can be solved more or less easily. For a), I suggest either increasing the function&apos;s timeout or restructuring your process to make the functions more independent (see &lt;strong&gt;asynchronous communication&lt;/strong&gt; below). For b), you can split the payload and call the function multiple times with one part each. However, this might not work if your function does not support this scenario because it requires the full payload, not only a part of it.&lt;/p&gt;
&lt;p&gt;A better alternative is to use a service like S3 to upload the full payload first, provide a link to the uploaded file in the actual function&apos;s payload and then download the file within the called function. This approach leads to a slightly longer execution time and more costs, but in my opinion it&apos;s the only way of solving the problem.&lt;/p&gt;
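&lt;p&gt;The core decision logic of this &quot;upload first, pass a link&quot; approach (often called the claim-check pattern) can be sketched like this. Note that &lt;code&gt;BlobStore&lt;/code&gt; is a hypothetical stand-in for S3 - in a real function you&apos;d use the AWS SDK&apos;s S3 client - and the in-memory implementation only keeps the sketch self-contained:&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the claim-check decision: small payloads travel inline,
// large ones are stored first and only a reference is passed along.
public class ClaimCheck {

    // Hypothetical stand-in for S3; swap in the AWS SDK client in practice.
    interface BlobStore {
        String put(byte[] data);
        byte[] get(String key);
    }

    static class InMemoryStore implements BlobStore {
        private final Map<String, byte[]> blobs = new HashMap<>();
        public String put(byte[] data) {
            String key = UUID.randomUUID().toString();
            blobs.put(key, data);
            return key;
        }
        public byte[] get(String key) {
            return blobs.get(key);
        }
    }

    // 6 MB: the payload limit for synchronous AWS Lambda invocations.
    static final int MAX_INLINE_BYTES = 6 * 1024 * 1024;

    // Returns null if the payload fits inline; otherwise stores it and
    // returns the key the called function should download it with.
    static String prepareInvocation(byte[] payload, BlobStore store) {
        if (payload.length <= MAX_INLINE_BYTES) {
            return null;
        }
        return store.put(payload);
    }
}
```

&lt;p&gt;The called function then checks whether it received the payload inline or a key, and downloads the blob in the latter case.&lt;/p&gt;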
&lt;h3&gt;Asynchronous Communication&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/serverless-async-communication-push.jpg&quot; alt=&quot;Architecture diagram showing asynchronous communication pattern with serverless functions pushing data to third-party service&quot; /&gt;&lt;/p&gt;
&lt;p&gt;First, push data to a third-party service...&lt;/p&gt;
&lt;p&gt;In contrast to synchronous communication, &lt;strong&gt;asynchronous communication&lt;/strong&gt; means calling another function in an indirect way. This involves a separate service in between two or more functions. For example, in an AWS Lambda function you can upload data to S3 which then asynchronously triggers another Lambda function to process this data. Or you can push data into a queuing service where other Lambda functions are consuming it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/serverless-async-communication-trigger.jpg&quot; alt=&quot;Architecture diagram showing serverless functions triggered asynchronously by third-party service events&quot; /&gt;&lt;/p&gt;
&lt;p&gt;... then let other function(s) be triggered as soon as new data arrives.&lt;/p&gt;
&lt;p&gt;Asynchronous communication helps you &lt;strong&gt;separate the concerns&lt;/strong&gt; within your architecture. And it makes your architecture more &lt;strong&gt;flexible&lt;/strong&gt;, because you can easily attach or detach functions to listen to events. Furthermore, you have better control of the data flow and better ways to increase/decrease the performance. As an example, if a lot of data is coming into a third-party service like S3 and you have more than enough function capacity to consume it, then your performance will be very fast 🚀 That&apos;s often great and in most cases desired.&lt;/p&gt;
&lt;p&gt;However, by reducing the capacity of your consumer functions, you can slow down the data flow. This is necessary in certain situations, which we&apos;ll discuss in the next section about scalability.&lt;/p&gt;
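&lt;p&gt;As a sketch of this pattern in a SAM template (all names are illustrative), a queue-triggered consumer function could be defined like this:&lt;/p&gt;

```yaml
# Illustrative SAM template snippet: data pushed to an SQS queue
# asynchronously triggers the consumer function.
MyQueue:
  Type: AWS::SQS::Queue

MyConsumerFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.MyConsumerFunction
    Runtime: java8
    CodeUri: target/my-consumer.jar
    Events:
      QueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt MyQueue.Arn
```

&lt;p&gt;Attaching another consumer to the same data is then just another function with its own event source - that&apos;s the flexibility mentioned above.&lt;/p&gt;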
&lt;h2&gt;Scalability&lt;/h2&gt;
&lt;p&gt;A common thought of people new to serverless is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Serverless functions scale automatically, I don&apos;t have to care about scalability.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This &lt;strong&gt;misconception&lt;/strong&gt; can easily lead to a lot of problems in your software. One major risk is that you do not consider the other services you&apos;re calling. For example, how can you be sure that the API you&apos;re calling at &lt;em&gt;api.example.org&lt;/em&gt; can also handle &quot;unlimited scalability&quot;? Scalability is no free lunch, especially not in older systems. You have to consider this!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/serverless-scalability-buffer.jpg&quot; alt=&quot;Diagram showing serverless functions scalability with buffered request handling&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The solution to this problem is to properly limit the executions of your serverless functions. For AWS Lambda, an easy option is to &lt;a href=&quot;https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/&quot;&gt;limit concurrent executions&lt;/a&gt; to a small number. The option can be applied per function. However, that might not be sufficient as you then have to deal with cases like running out of capacity. Another solution is using a service to buffer any kind of request or execution of your function. This approach reuses the asynchronous communication best practice from above. You take a service like Kinesis or SQS (instead of S3) where you can (more or less) limit the throughput of your data. These services then invoke your functions. For example, in Kinesis you can define how many shards a stream should have. The more shards you have, the more Lambda functions are executed concurrently.&lt;/p&gt;
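&lt;p&gt;For AWS Lambda, such a concurrency limit can be expressed directly in an infrastructure template. Here is an illustrative SAM sketch (names and values are made up):&lt;/p&gt;

```yaml
# Illustrative SAM snippet: cap the function at 5 concurrent executions
# so a downstream API is not overwhelmed.
MyApiCallingFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.MyApiCallingFunction
    Runtime: java8
    CodeUri: target/my-function.jar
    ReservedConcurrentExecutions: 5
```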
&lt;h2&gt;Time Management&lt;/h2&gt;
&lt;p&gt;The last important point is about &lt;strong&gt;time management&lt;/strong&gt;. As you know, serverless functions are usually restricted to run only a few minutes (or even &lt;a href=&quot;https://developers.cloudflare.com/workers/about/limits/&quot;&gt;seconds for CloudFlare Workers&lt;/a&gt;!). By this I mean handling the uncertainties of your serverless functions without running into their time limits. Often people think &quot;&lt;em&gt;5 minutes are enough for my function to execute&lt;/em&gt;.&quot; But can you &lt;em&gt;really&lt;/em&gt; assure your function will never run out of time? Even 5 or 15 minutes can pass quite quickly if you&apos;re processing some data and have to interact with other services. (💡Hint: the previous best practice about small function size will hit you here if you don&apos;t follow it 😉) You always need to consider that you&apos;re working in a network, i.e. an &lt;strong&gt;unreliable environment&lt;/strong&gt;. Anything can go wrong! Thus, always use &lt;a href=&quot;https://blog.runscope.com/posts/phil-sturgeon-taking-a-timeout-from-poor-performance&quot;&gt;reasonable timeouts&lt;/a&gt; when calling other services. Never assume they&apos;ll always respond like in your development tests.&lt;/p&gt;
&lt;p&gt;There are three approaches to solving the timeout problem. The &lt;strong&gt;first approach&lt;/strong&gt; is using recursion. You can see an example using AWS Lambda code in the following picture:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-lambda-recursion-example.jpg&quot; alt=&quot;Example code showing recursion with AWS Lambda&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The code regularly checks the remaining time your function is allowed to run before reaching its timeout. As long as a certain threshold of remaining time is not reached, you continue doing your computation. Otherwise, you store your current state somewhere and call &quot;yourself&quot; again. You&apos;ve probably already noticed that this approach has two big flaws: First, you never know if the same function instance is used when you&apos;re calling yourself. That&apos;s up to the cloud provider to decide. Second, you never know if the chosen threshold is high enough to not run into the timeout. Thus, I &lt;strong&gt;cannot recommend this approach&lt;/strong&gt;, though it is possible.&lt;/p&gt;
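&lt;p&gt;The time-checking loop from the picture can be sketched as follows. &lt;code&gt;remainingMillis&lt;/code&gt; stands in for AWS Lambda&apos;s &lt;code&gt;context.getRemainingTimeInMillis()&lt;/code&gt;, and the safety threshold is a guess - which is exactly the second flaw mentioned above:&lt;/p&gt;

```java
import java.util.function.LongSupplier;

// Sketch of the recursion approach: keep working while enough time is
// left, otherwise save state and re-invoke yourself. remainingMillis is
// a stand-in for AWS Lambda's context.getRemainingTimeInMillis().
public class RecursiveWorker {

    // Guessed buffer - there is no safe way to know the "right" value.
    static final long SAFETY_THRESHOLD_MS = 10_000;

    // Returns the number of items processed before the time budget ran out.
    static int processUntilThreshold(int totalItems, int startIndex,
                                     LongSupplier remainingMillis, Runnable reinvokeSelf) {
        int i = startIndex;
        while (i < totalItems) {
            if (remainingMillis.getAsLong() < SAFETY_THRESHOLD_MS) {
                // Out of budget: persist 'i' somewhere durable, then re-invoke.
                reinvokeSelf.run();
                break;
            }
            i++; // placeholder for the real per-item work
        }
        return i - startIndex;
    }
}
```

&lt;p&gt;The caller would pass the next start index along in the re-invocation payload, which is where the state-handling complexity of this approach starts.&lt;/p&gt;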
&lt;p&gt;The &lt;strong&gt;second approach&lt;/strong&gt; is making use of a separate server or container. Here, you outsource all processes where you&apos;re unsure how long they take to run. They&apos;ll run on a different service like EC2 or Fargate. You might be thinking &quot;why do you suggest using EC2 when you&apos;re talking about serverless?&quot; - and you&apos;re probably right 🤷‍♀️ But you must also admit that long-running tasks aren&apos;t made for serverless functions unless you can split them up into smaller tasks which fit serverless functions again. And this way of splitting things up often leads to the last approach.&lt;/p&gt;
&lt;h3&gt;Recommended Approach&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;third approach&lt;/strong&gt; is using Step Functions as the execution engine of your process. Here&apos;s an example of a Step Function state machine using AWS Lambda functions:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-step-functions-example.jpg&quot; alt=&quot;Example using AWS Step Functions and AWS Lambda&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This example is very basic but it can be extended to run very complicated processes using loops and decisions. Using Step Functions is a great way of overcoming the timeout constraint if you can split up your tasks into smaller chunks. It takes away the work of managing the execution of your functions. For example, it even lets you react to errors within your function, like an exception. For these reasons I believe that Step Functions is an often undervalued service that offers great features to complement the serverless experience.&lt;/p&gt;
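&lt;p&gt;A state machine like the one in the picture is described in JSON using the Amazon States Language. A minimal sketch with two steps and an error handler might look like this (the function ARNs are placeholders):&lt;/p&gt;

```json
{
  "Comment": "Illustrative state machine splitting a long task into steps",
  "StartAt": "PrepareData",
  "States": {
    "PrepareData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:PrepareData",
      "Next": "ProcessData",
      "Catch": [
        { "ErrorEquals": ["States.ALL"], "Next": "HandleFailure" }
      ]
    },
    "ProcessData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:ProcessData",
      "End": true
    },
    "HandleFailure": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:HandleFailure",
      "End": true
    }
  }
}
```

&lt;p&gt;The &lt;code&gt;Catch&lt;/code&gt; block is what lets you react to exceptions thrown inside a function, as mentioned above.&lt;/p&gt;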
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Serverless is a great &quot;new&quot; way (it&apos;s already more than five years old) of writing software in the cloud. And with other developments, there are even situations where &lt;a href=&quot;https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/&quot;&gt;you don&apos;t even need serverless functions&lt;/a&gt; anymore. However, if you have read until this point, you now know what to look out for when starting your journey to going serverless. And you should have recognized that using other services is a necessity if you want to be successful with serverless functions. At least in most cases.&lt;/p&gt;
&lt;p&gt;One disadvantage of serverless is that your architecture gets complicated quite quickly. As always in life and especially in the field of software engineering, there is a trade-off you have to make. For serverless functions, the trade-off for a more complicated architecture is almost no maintenance effort and automatic scalability 🚀 You have to decide if you want to pay the price. My recommendations partly cover topics from general recommendations in software engineering, such as the SOLID principles. If you continue applying them, you&apos;ll also succeed in the serverless space 👍&lt;/p&gt;
&lt;p&gt;If you want to learn more about serverless, you can have a look at other blog posts here, like &lt;a href=&quot;/2018/12/16/caching-in-aws-lambda/&quot;&gt;Caching in AWS Lambda&lt;/a&gt; to improve the speed of your Lambda functions! Or watch one of my previous talks on these topics, like &lt;a href=&quot;https://www.youtube.com/watch?v=4_4IOFhhHYY&quot;&gt;serverless analytics and monitoring&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Going Serverless - Why and How (1)</title><link>https://www.sebastianhesse.de/2019/07/21/going-serverless-why-and-how-1/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2019/07/21/going-serverless-why-and-how-1/</guid><description>Start your serverless journey with this comprehensive guide. Learn why serverless matters, how to use Infrastructure as Code with SAM, and set up automated deployments.</description><pubDate>Sun, 21 Jul 2019 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;From monoliths to microservices and containers to serverless functions: the software engineering world is changing fast. Popular technologies from today will be outdated tomorrow and it isn&apos;t easy to follow them all. The same is true for taking the first step when going serverless. Hence I&apos;ll present you with my best practices for going serverless to save you time on choosing the right way.&lt;/p&gt;
&lt;p&gt;With this blog post, I&apos;m focusing on serverless functions like AWS Lambda. It&apos;s part of a series of two posts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Why &quot;Going Serverless&quot; and how to start (this)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/2020/03/31/going-serverless-why-and-how-2/&quot;&gt;Best practices for your architecture and development&lt;/a&gt; (next blog post)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Why Serverless&lt;/h2&gt;
&lt;p&gt;People often ask me why I&apos;d prefer serverless functions compared to typical setups using containers or alike. For me there is one simple answer: In a perfect world, I don&apos;t want to care about anything in my infrastructure. I don&apos;t want to know the operating system, which security updates it needs and which packages I have to install. &lt;strong&gt;It should just work&lt;/strong&gt;. The machines are handling it for me. I just need to know how it works in general and how I can use it. But not in detail. Other people who are more interested or knowledgeable on this topic should take care of it. I want to develop my applications and provide a benefit for my customers. This is probably true for most users of serverless functions and the main reason why they adopt it. There are also more reasons like quick code updates (yes, updating your function&apos;s code can be &lt;strong&gt;very&lt;/strong&gt; quick) and high scalability.&lt;/p&gt;
&lt;p&gt;Besides the general infrastructure topic, the choice of my application&apos;s frameworks also changes. Considering a fast startup of your functions, you can&apos;t include all the dependencies you want (this will be discussed in more detail in the next blog post). For example, if you&apos;re used to the Spring Framework in Java, you usually include a lot of its modules to get Dependency Injection, Authentication, and more. This blows up the total size of your JAR file. Not only is this bad for your Lambda function&apos;s performance. Also, in the era of cloud services, you usually don&apos;t need to build this on your own. The key is to use managed services. These can be offered by your cloud provider or independent SaaS products. (This &lt;a href=&quot;https://martinfowler.com/articles/serverless.html#what-isnt-serverless&quot;&gt;often counts as &quot;Serverless&quot;&lt;/a&gt; as well) As a result, you&apos;ll also focus more on your own business logic instead of your infrastructure.&lt;/p&gt;
&lt;h2&gt;How to Start&lt;/h2&gt;
&lt;p&gt;Now that you know why you want to use serverless functions, you need to know the best ways for going serverless. As always, there are multiple ways to do that. First, you should know which serverless offerings are available. Here is an (incomplete) list:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://aws.amazon.com/lambda/&quot;&gt;AWS Lambda&lt;/a&gt; - the first serverless service&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://cloud.google.com/functions/&quot;&gt;Google Cloud Functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://azure.microsoft.com/en-in/services/functions/&quot;&gt;Microsoft Azure Functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.cloudflare.com/products/cloudflare-workers/&quot;&gt;CloudFlare Workers&lt;/a&gt; - similar, but only available at CloudFlare&apos;s CDNs and has different limitations&lt;/li&gt;
&lt;li&gt;more: &lt;a href=&quot;https://openwhisk.apache.org/&quot;&gt;Apache OpenWhisk&lt;/a&gt;, &lt;a href=&quot;https://fnproject.io/&quot;&gt;fn project&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All service offerings are similar to a certain extent. As mentioned earlier, I&apos;ll focus my blog post on AWS Lambda as this is the service I&apos;m the most comfortable with.&lt;/p&gt;
&lt;h3&gt;Infrastructure as Code&lt;/h3&gt;
&lt;p&gt;Regardless of which provider you choose, there is one important aspect you need to consider when going serverless: you must be able to describe your &lt;a href=&quot;https://www.hashicorp.com/resources/what-is-infrastructure-as-code&quot;&gt;infrastructure as code (IaC)&lt;/a&gt;. This is crucial to keep track of all the serverless functions (and their versions) which you&apos;ll create. Otherwise, I bet you&apos;ll lose the overview in your system. Moreover, by using IaC you&apos;ll be able to use an automated deployment &amp;amp; delivery process to ship your code faster and with fewer errors.&lt;/p&gt;
&lt;p&gt;Popular choices for such frameworks are &lt;a href=&quot;https://serverless.com&quot;&gt;Serverless.com&lt;/a&gt; or &lt;a href=&quot;https://aws.amazon.com/serverless/sam/&quot;&gt;Serverless Application Model (SAM)&lt;/a&gt;. Both have a strong focus on serverless functions, with the only difference that SAM can only be used for AWS whereas Serverless.com works for multiple cloud providers. As an alternative, you can consider using &lt;a href=&quot;https://www.terraform.io/&quot;&gt;Terraform&lt;/a&gt; as well. With all frameworks, you will have one or multiple JSON/YAML files describing the resources of your application stack. For instance, a database, your functions, log services, intermediary services like queues, and more. A sample definition of a serverless function using SAM looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MyServerlessFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.MyServerlessFunction
    Runtime: java8
    CodeUri: target/my-function.jar
    MemorySize: 256
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can find a lot of ready-to-use examples for AWS Lambda and the SAM framework in my GitHub repositories &lt;a href=&quot;https://github.com/seeebiii/aws-lambda-boilerplate&quot;&gt;aws-lambda-boilerplate&lt;/a&gt; and &lt;a href=&quot;https://github.com/seeebiii/aws-cloudformation-templates&quot;&gt;aws-cloudformation-templates&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Code Deployments&lt;/h3&gt;
&lt;p&gt;Another important aspect when going serverless is to think about the way you want to deploy &amp;amp; deliver your function&apos;s code. In the best case, you can &lt;a href=&quot;https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment&quot;&gt;automate the whole deployment pipeline&lt;/a&gt;. You can choose tools like &lt;a href=&quot;https://bitbucket.org/product/features/pipelines&quot;&gt;Bitbucket Pipelines&lt;/a&gt;, &lt;a href=&quot;https://aws.amazon.com/codedeploy/&quot;&gt;AWS CodeDeploy&lt;/a&gt; with &lt;a href=&quot;https://aws.amazon.com/codepipeline/&quot;&gt;AWS CodePipeline&lt;/a&gt;, and others to support you with that. (And I strongly encourage you to use them!) Let&apos;s consider an example for Bitbucket Pipelines:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;image: openjdk:8
  
pipelines:
  default:
    - step:
        name: Build, test, deploy app
        script:
          - mvn package
          - mvn test
          - ./deploy-app.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This file configures when your pipeline runs (for each commit on every branch) and what is executed (package your code and deploy it). As you can see, the definition is really small and simple to understand. However, you can further customize it to your needs, for instance if you use different branches like &lt;code&gt;develop&lt;/code&gt;, &lt;code&gt;master&lt;/code&gt; and other feature branches.&lt;/p&gt;
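&lt;p&gt;For example, a branch-specific setup could look like this (an illustrative sketch): build and test on every branch, but only deploy from &lt;code&gt;master&lt;/code&gt;:&lt;/p&gt;

```yaml
# Illustrative sketch: build everywhere, deploy only from master.
image: openjdk:8

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - mvn package
          - mvn test
  branches:
    master:
      - step:
          name: Build, test, deploy app
          script:
            - mvn package
            - mvn test
            - ./deploy-app.sh
```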
&lt;p&gt;Similarly, it&apos;s not only important &lt;em&gt;what&lt;/em&gt; you deploy, but also &lt;em&gt;how&lt;/em&gt;. Especially Lambda functions can be a bit tricky here. By default, as soon as you update a Lambda function&apos;s code, AWS Lambda will immediately take this code for all upcoming requests made to your function. On the one hand, this means you&apos;ll get a very quick code update. On the other hand, if you introduce a bug, all of your requests will be processed with this bug in your code. Therefore it makes sense to use &lt;a href=&quot;https://martinfowler.com/bliki/CanaryRelease.html&quot;&gt;canary releases&lt;/a&gt;. They let you partially increase the traffic to a new version of your Lambda function without immediately exposing the new code to all users. Both the &lt;a href=&quot;https://serverless.com/blog/manage-canary-deployments-lambda-functions-serverless-framework/&quot;&gt;Serverless framework&lt;/a&gt; and &lt;a href=&quot;https://github.com/awslabs/serverless-application-model/blob/master/docs/safe_lambda_deployments.rst&quot;&gt;SAM&lt;/a&gt; support you here.&lt;/p&gt;
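&lt;p&gt;With SAM, such a canary release can be enabled declaratively. The following sketch (names are illustrative) shifts 10% of the traffic to the new version and promotes it after five minutes if nothing goes wrong:&lt;/p&gt;

```yaml
# Illustrative SAM snippet: publish a new version behind the 'live' alias
# and shift traffic gradually instead of all at once.
MyServerlessFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.MyServerlessFunction
    Runtime: java8
    CodeUri: target/my-function.jar
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes
```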
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In conclusion, going serverless can be a lot of fun. No more messing around with low-level details, but instead focusing on the application. However, you still need to know a few details about when and how to use serverless in the best way. I have provided you with a rather high-level introduction to that. The next blog post will focus on a lot more details from a developer&apos;s perspective. For example, how to write performant Lambda functions, how to combine them in an event-driven world and which other tools or services you can use. In the meantime, I encourage you to check out this talk about &lt;a href=&quot;https://speakerdeck.com/danilop/taking-serverless-to-the-next-level-4e03cdc6-bdf8-4fc8-880a-cafcf8d6eca1&quot;&gt;taking serverless to the next level&lt;/a&gt; by Danilo Poccia.&lt;/p&gt;
</content:encoded></item><item><title>Visiting JavaLand 2019</title><link>https://www.sebastianhesse.de/2019/03/31/visiting-javaland-2019/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2019/03/31/visiting-javaland-2019/</guid><description>JavaLand 2019 key takeaways: microservices transactions with SAGA pattern, Domain Driven Design principles, and Web API design best practices for Java developers.</description><pubDate>Sun, 31 Mar 2019 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Last week I attended &lt;a href=&quot;https://www.javaland.eu&quot;&gt;JavaLand 2019&lt;/a&gt;, which was a great experience for me. Lots of different people, interesting talks and great speakers! It was my first time there and I have to admit: not only was the usual conference content great, but the location was also really unique - it&apos;s located in Phantasialand, an amusement park near Cologne, Germany.&lt;/p&gt;
&lt;p&gt;I gained a lot of new ideas and thoughts which I want to share with you here. This is not a super-detailed summary of each talk, so don&apos;t expect a complete lecture. Instead, I want to give you an idea of some talks, so you can dig deeper if you think it&apos;s interesting for you. The reason is quite simple: Doing something on your own will bring the best learning experience. 😊&lt;/p&gt;
&lt;h1&gt;My Lessons Learned (or: TL;DR)&lt;/h1&gt;
&lt;p&gt;Based on the talk program and content, you can see a rough direction of where the Java world is heading and what&apos;s currently going on there. Most of the talks I&apos;ve visited revolved around the following two topics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Microservices, Microservices, Microservices: You should know what microservices are and how you can use them. Especially transactions in microservices are tricky, hence there were actually two talks about this topic. A common suggestion was to read the book &quot;&lt;a href=&quot;https://www.amazon.de/Microservice-Patterns-examples-Chris-Richardson/dp/1617294543&quot;&gt;Microservices Patterns&lt;/a&gt;&quot;.&lt;/li&gt;
&lt;li&gt;Domain Driven Design (DDD): If you don&apos;t know what it is, you should have a look at a few articles online or read the &lt;a href=&quot;https://www.amazon.de/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215&quot;&gt;blue book about DDD&lt;/a&gt;. There is also another book &quot;&lt;a href=&quot;https://www.amazon.de/gp/product/0321834577&quot;&gt;Implementing Domain Driven Design&lt;/a&gt;&quot; which covers a more practical approach and was recommended in a lot of talks as well.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Besides that, a lot of talks focused on testing topics (e.g. testing with Docker containers, CI/CD), on recent Java/JDK updates (e.g. license change; GraalVM to convert Java to native code) or on anything related to the Java world (e.g. how to run a business on open source code).&lt;/p&gt;
&lt;h1&gt;My Top 3 Talks&lt;/h1&gt;
&lt;p&gt;It&apos;s a hard decision to present the best three talks, because there were a lot of good talks. I&apos;ve decided to present talks &lt;strong&gt;a)&lt;/strong&gt; where I&apos;ve learned the most about the topic, &lt;strong&gt;b)&lt;/strong&gt; which were presented in a good way and &lt;strong&gt;c)&lt;/strong&gt; which you can benefit from as well.&lt;/p&gt;
&lt;h2&gt;Web-API-Design in Java (by Stephan Müller)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Slides:&lt;/strong&gt; &lt;a href=&quot;https://www.javaland.eu/formes/pubfiles/11140725/2019-nn-stephan_mueller-web-api-design_in_java-praesentation.pdf&quot;&gt;click here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;First statement of the talk: the API is the UI of a developer. I really like this quote. He went on to show five different ways to build an API: 1) REST, 2) GraphQL, 3) Server-sent events, 4) web sockets, 5) Event Feeds. The main thing to remember is: REST is good for cases where you have data with certain boundaries, i.e. you just return well-defined data, whereas GraphQL is really helpful if your data is interrelated and you need to make a lot of connections between entities. In that case, you can save a lot of HTTP requests by doing one request to your GraphQL backend. The rest of the talk covered best practices: from documenting your API (e.g. using the Open API Specification) and correct error handling (e.g. never return a stack trace!), through data validation (e.g. using Java Bean Validation) and security (e.g. don&apos;t use BasicAuth, use JWT instead), to versioning your API (e.g. using the URL or an Accept header with a versioned media type).&lt;/p&gt;
&lt;p&gt;Read more: &lt;a href=&quot;http://apistylebook.com/design/guidelines/&quot;&gt;API Design Guidelines collection&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I have to admit: I&apos;m already using a lot of the presented best practices. But it&apos;s always good to double-check that again from time to time. Are you using everything of that? For the German speaking people here, I can highly recommend the book &quot;&lt;a href=&quot;https://www.amazon.de/REST-HTTP-Entwicklung-Integration-Architekturstil/dp/3864901200&quot;&gt;REST und HTTP&lt;/a&gt;&quot; which is related to this topic. I&apos;ve read it and it covers a lot of good recommendations with really good examples!&lt;/p&gt;
&lt;h2&gt;Microservices and Transactions (by Lars Röwekamp)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Slides:&lt;/strong&gt; &lt;a href=&quot;https://de.slideshare.net/_openknowledge/microservices-und-transaktionen-mittendrin-statt-nur-dabei&quot;&gt;click here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The talk started by presenting why &lt;a href=&quot;https://www.enterpriseintegrationpatterns.com/ramblings/18_starbucks.html&quot;&gt;Starbucks does not use two-phase commits&lt;/a&gt;. The real world is often not transactional, so why should your software be? Do you really need transactions? Can you solve the problem on the business level instead? Often you can! If transactions are really necessary, use one of the following strategies:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Think about your service boundaries again. You can merge services if it makes sense, but try to avoid building a new monolith again. 😀&lt;/li&gt;
&lt;li&gt;Use a gateway service for transactions using &lt;a href=&quot;https://en.wikipedia.org/wiki/X/Open_XA&quot;&gt;XA (eXtended Architecture)&lt;/a&gt; and a two-phase commit protocol. (I hadn&apos;t heard the term &quot;XA&quot; before, but I do think it&apos;s a bad idea to use a centralized gateway service to manage all transactions across different microservices.)&lt;/li&gt;
&lt;li&gt;DIY two-phase commit XA gateway - simple conclusion: don&apos;t do it at home.&lt;/li&gt;
&lt;li&gt;Transactions using the &lt;a href=&quot;https://microservices.io/patterns/data/saga.html&quot;&gt;SAGA pattern&lt;/a&gt;. (I have to admit I hadn&apos;t heard about this pattern before.) The basic principle is that you split a business transaction into multiple technical ones, for example &quot;create pending order&quot; → &quot;check &amp;amp; reserve credit limit&quot; → &quot;approve order with reserved credit limit&quot;. In case of an error you have to stop and compensate (read: undo) the previous operations. There are basically two ways to achieve this:
&lt;ol&gt;
&lt;li&gt;using a &lt;strong&gt;choreography&lt;/strong&gt;, meaning implicitly controlling the flow through events
&lt;ol&gt;
&lt;li&gt;Problem: increasing complexity + your business code is distributed over your architecture&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;using an &lt;strong&gt;orchestration&lt;/strong&gt;, meaning explicitly controlling the flow by calling services directly with one master service → easier reset possible in case an error happens
&lt;ol&gt;
&lt;li&gt;Problem: higher challenge to coordinate everything&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
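&lt;p&gt;To make the orchestration variant more concrete, here is a minimal Node.js sketch of a saga orchestrator. This is my own simplified illustration, not code from the talk; the step and compensation functions are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Run each step of a saga; if one fails, undo the completed
// steps in reverse order using their compensation functions.
async function runSaga(steps) {
    const compensations = [];
    try {
        for (const step of steps) {
            await step.action();
            compensations.unshift(step.compensate); // newest first
        }
        return { ok: true };
    } catch (err) {
        for (const compensate of compensations) {
            await compensate(); // e.g. release a reserved credit limit
        }
        return { ok: false, reason: err.message };
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A step like &quot;check &amp;amp; reserve credit limit&quot; would then provide an &lt;em&gt;action&lt;/em&gt; (reserve the limit) and a &lt;em&gt;compensation&lt;/em&gt; (release it again).&lt;/p&gt;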
&lt;p&gt;&lt;strong&gt;Recommendation&lt;/strong&gt;: use choreography if the process is quite simple (e.g. fewer than 5 steps), otherwise use orchestration. However, it&apos;s not as easy as it might sound. For example, what happens if a microservice dies while performing the transaction? How can it catch up again and maybe resume the previous work (if required)? Hence he suggests using a framework to support you with that, for example &lt;a href=&quot;https://github.com/eventuate-tram/eventuate-tram-sagas&quot;&gt;Eventuate Tram Sagas&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There was another talk about this topic, called &quot;&lt;a href=&quot;https://www.javaland.eu/formes/pubfiles/11143355/2019-nn-bernd_ruecker-lost_in_transaction_data_consistency_in_distributed_systems-praesentation.pdf&quot;&gt;Lost in Transaction? Data consistency in distributed systems&lt;/a&gt;&quot; by Bernd Ruecker, the founder of &lt;a href=&quot;https://camunda.org/&quot;&gt;Camunda&lt;/a&gt; (a workflow engine that supports such cases).&lt;/p&gt;
&lt;p&gt;Btw. a book recommendation by Lars: &quot;&lt;a href=&quot;https://www.amazon.de/Microservice-Patterns-examples-Chris-Richardson/dp/1617294543&quot;&gt;Microservices Patterns&lt;/a&gt;&quot;.&lt;/p&gt;
&lt;h2&gt;Hitchhiker&apos;s Guide to Serverless (by Lars Röwekamp)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Slides:&lt;/strong&gt; &lt;a href=&quot;https://de.slideshare.net/_openknowledge/surviving-serverless-in-reallife&quot;&gt;click here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Using serverless in your app can introduce many different points of failure: failures in your code, in the integration of functions, or in services over which you have no control. Hence it&apos;s necessary to properly monitor and test your functions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Monitoring:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DIY: build your own tracing and monitoring solution, e.g. using serverless functions, and store data in different services, so you can analyze them as needed&lt;/li&gt;
&lt;li&gt;Use cloud services: use already provided services like X-Ray or CloudWatch to monitor your metrics or inspect the runtime behaviour of your functions&lt;/li&gt;
&lt;li&gt;Use external services: in case of a multi-cloud strategy you need services that are independent of your cloud provider, e.g. &lt;a href=&quot;http://logz.io/&quot;&gt;logz.io&lt;/a&gt;, &lt;a href=&quot;https://dashbird.io/&quot;&gt;Dashbird&lt;/a&gt; or tracing with an ELK stack.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Monitoring Tips:&lt;/strong&gt; monitor asynchronously (i.e. a user shouldn&apos;t notice higher latency because of monitoring) and also monitor business-relevant metrics (e.g. sales volume → if it decreases unexpectedly, you know something is wrong in your system).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Testing Tips:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;make sure to separate business logic and infrastructure glue code&lt;/li&gt;
&lt;li&gt;write unit tests (e.g. using JUnit), write integration tests (e.g. execute functions locally and mock certain services → saves you money), write end-to-end tests (e.g. by running your cloud locally or at least in a separate dev environment which you can shut down as soon as the tests are done to save money)&lt;/li&gt;
&lt;/ul&gt;
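&lt;p&gt;The first testing tip deserves a tiny example. If the business logic is a pure function, you can unit test it without any AWS setup, and only the thin handler touches the infrastructure. The discount rule below is a made-up example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Pure business logic: trivial to unit test.
function applyDiscount(order) {
    if (order.total &gt;= 100) {
        return Object.assign({}, order, { total: order.total * 0.9 });
    }
    return order;
}

// Thin infrastructure glue: only parses the event and calls the logic.
module.exports.handler = async function(event) {
    const order = JSON.parse(event.body);
    return { statusCode: 200, body: JSON.stringify(applyDiscount(order)) };
};

module.exports.applyDiscount = applyDiscount;
&lt;/code&gt;&lt;/pre&gt;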
&lt;p&gt;If you have further questions about this topic, let me know! I hope you enjoyed this short summary of my JavaLand 2019 experience.&lt;/p&gt;
&lt;/content:encoded&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;Caching in AWS Lambda&lt;/title&gt;&lt;link&gt;https://www.sebastianhesse.de/2018/12/16/caching-in-aws-lambda/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://www.sebastianhesse.de/2018/12/16/caching-in-aws-lambda/&lt;/guid&gt;&lt;description&gt;Improve AWS Lambda performance and reduce costs with caching strategies. Compare simple caching, DynamoDB cache, Redis, and ElastiCache for serverless functions.&lt;/description&gt;&lt;pubDate&gt;Sun, 16 Dec 2018 00:00:00 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&amp;lt;p&amp;gt;In every &amp;lt;a href=&amp;quot;/2019/07/21/going-serverless-why-and-how-1/&amp;quot;&amp;gt;serverless application&amp;lt;/a&amp;gt;, there are usually two main reasons to cache data: a) to improve performance and b) to reduce costs. Caching in AWS Lambda is no different. In fact, the reasons for caching might be even more important in this context. This blog post explains why it could be necessary for you and shows how to implement different caching options. In other words: How to find the best AWS Lambda cache option! This blog post is based on a talk I gave at the &amp;lt;a href=&amp;quot;https://www.meetup.com/de-DE/AWS-UserGroup-Stuttgart/events/256265590/&amp;quot;&amp;gt;AWS User Group Stuttgart meetup&amp;lt;/a&amp;gt; in December 2018. You can &amp;lt;a href=&amp;quot;https://speakerdeck.com/sebastianhesse/caching-in-aws-lambda&amp;quot;&amp;gt;find the slides here&amp;lt;/a&amp;gt; and the code is provided in &amp;lt;a href=&amp;quot;https://github.com/seeebiii/caching-in-aws-lambda&amp;quot;&amp;gt;this GitHub repo&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
&lt;h2&gt;Reasons for Caching&lt;/h2&gt;
&lt;p&gt;Let&apos;s first talk about the reasons why you need caching in AWS Lambda. Often a Lambda function will not only do some internal/local processing, but also call other systems or services. This could be a database like DynamoDB or any kind of service with an API. Such calls might be costly, e.g. in terms of time spent waiting for a response, or actual money if the pricing is based on the number of API requests. Since you pay for the execution time of a Lambda function as well, waiting for a response will also cost you money in the end. You can easily save some time (and money 😉) if you don&apos;t have to wait for &lt;strong&gt;expensive API calls&lt;/strong&gt; in each execution and instead cache certain data. Furthermore, the cost argument becomes even more dramatic if you think about the scalability of a function. Just imagine the following scenario:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-lambda-caching-external-services.png&quot; alt=&quot;calling external services from a Lambda function&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Calling external services from a Lambda function&lt;/p&gt;
&lt;p&gt;You need to understand two things here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Your Lambda function will make the same request for &lt;strong&gt;each invocation&lt;/strong&gt; of the same function&apos;s instance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Each instance&lt;/strong&gt; of your Lambda function will make the same request as well.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In many use cases it&apos;s not necessary to make expensive calls again and again. Hence you can save a lot by simply caching your data for a certain amount of time. Before we look at the different caching options, we quickly need to investigate how the code of a Lambda function is executed. With this knowledge, we can then introduce caching into our functions.&lt;/p&gt;
&lt;h2&gt;Execution Process of a Lambda Function&lt;/h2&gt;
&lt;p&gt;Each Lambda function goes through the same execution process: if no instance of your Lambda function is available, or all existing instances are busy with invocations, a new instance is started (a &quot;&lt;a href=&quot;/2017/06/24/5-things-consider-writing-lambda-function/&quot;&gt;&lt;strong&gt;cold start&lt;/strong&gt;&lt;/a&gt;&quot;). This cold start involves an initial start of the Lambda runtime (e.g. a Node.js runtime or a JVM for Java) and runs the initialization code which is declared before your Lambda function handler. In Node.js this could be requiring a dependency or reading an environment variable; in Java it might be importing classes and doing some field initializations in your constructor. If you&apos;re using languages like Python or Node.js, you&apos;ll generally have really good cold start performance. However, &lt;a href=&quot;/2021/02/14/using-spring-boot-on-aws-lambda-clever-or-dumb/&quot;&gt;using Java or Spring Boot&lt;/a&gt; with Lambda requires more consideration due to the JVM startup overhead. After this initialization phase the actual function handler is called. Here is a Node.js example from my slides explaining this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-lambda-execution-process.png&quot; alt=&quot;simplified aws lambda execution process&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Simplified AWS Lambda execution process&lt;/p&gt;
&lt;p&gt;One advantage is that variables declared outside of the handler function survive Lambda invocations. That means you can set a value in one invocation and access it in all following invocations. This works per Lambda instance, until that specific instance is shut down. The caching &quot;trick&quot; is to make use of these variables outside of your handler function&apos;s scope. This is described in more detail below.&lt;/p&gt;
&lt;h2&gt;Caching Options&lt;/h2&gt;
&lt;p&gt;Now, in order to cache your data, you have four options available:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;simple caching&lt;/strong&gt; by using simple variables declared outside of a Lambda handler function,&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DynamoDB caching&lt;/strong&gt; by using DynamoDB as our cache,&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;custom caching&lt;/strong&gt; by using a caching library on separate servers or&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;managed caching&lt;/strong&gt; by using a managed caching service.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Simple Caching&lt;/h3&gt;
&lt;p&gt;With a &lt;a href=&quot;https://github.com/seeebiii/caching-in-aws-lambda/tree/master/02-lambda-with-simple-caching&quot;&gt;simple caching approach&lt;/a&gt;, you declare a variable outside of a Lambda handler function which stores your cached data. In Node.js, this could be a simple key-value object. In Java, you can use a HashMap or another Map implementation. For example, you can store &quot;userkey_123&quot; as key and &quot;John Smith&quot; as value. The general process looks like this: on each invocation of your Lambda function, you check whether the key &quot;userkey_123&quot; is stored in your local cache variable. If yes, you use it and continue in your code. If not, you make a call to your external service and store the response in your local cache. It&apos;s pretty easy, hence &lt;em&gt;simple caching&lt;/em&gt;. Here is an example written in Node.js:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;let cachedValue;

module.exports.handler = function(event, context, callback) {
    console.log(&apos;Starting Lambda.&apos;);

    if (!cachedValue) {
        console.log(&apos;Setting cachedValue now...&apos;);
        cachedValue = &apos;Foobar&apos;;
    } else {
        console.log(&apos;Cached value is already set: &apos;, cachedValue);
    }

    // Complete the invocation; otherwise the function waits until it times out.
    callback(null, cachedValue);
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can extend the code to also expire the cache contents after some time. Popular libraries are &lt;a href=&quot;https://www.npmjs.com/package/node-cache&quot;&gt;node-cache&lt;/a&gt; for Node.js or the &lt;a href=&quot;https://github.com/google/guava/wiki/CachesExplained&quot;&gt;Cache class from Guava library&lt;/a&gt; for Java.&lt;/p&gt;
&lt;p&gt;However, there are two things you need to consider: First, make sure that you &lt;strong&gt;correctly scope your cached data&lt;/strong&gt;. That means, take care that your Lambda function might also be called in different contexts. For example, if your function can be invoked with different parameters, e.g. with different user objects, make sure you don&apos;t accidentally leak data to a different context. In such a case you can scope a key to prevent that, for example by using &lt;em&gt;cache[userkey]&lt;/em&gt; or some other identifier. Second, please consider that this cache is only accessible to one particular Lambda function instance. If your Lambda function is experiencing a lot of traffic and multiple instances have been started, then each Lambda instance has its own local cache. These &lt;strong&gt;local caches are not synchronized&lt;/strong&gt;! However, using such a local cache still helps you avoid the same expensive calls within the same instance 😊 If you need to synchronize cached data between instances, you should consider one of the following caching options.&lt;/p&gt;
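&lt;p&gt;Both points, expiring entries and scoping keys, fit into a few lines. Here is a minimal sketch without external libraries (in practice a library like node-cache handles this for you):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Module-level cache: survives invocations within the same instance.
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // expire entries after 5 minutes

function getCached(userKey, loadFn) {
    const entry = cache.get(userKey);
    if (entry) {
        if (Date.now() &lt; entry.expiresAt) {
            return entry.value; // still fresh
        }
    }
    const value = loadFn(); // the expensive call
    cache.set(userKey, { value: value, expiresAt: Date.now() + TTL_MS });
    return value;
}

// Scoping by user key prevents leaking data between contexts,
// e.g. getCached(&apos;userkey_123&apos;, function() { return callUserService(&apos;123&apos;); });
// where callUserService is a placeholder for your expensive call.
&lt;/code&gt;&lt;/pre&gt;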
&lt;h3&gt;DynamoDB Caching&lt;/h3&gt;
&lt;p&gt;In a traditional caching setup, you&apos;d use a system like Redis or Memcached to cache your data. This is also discussed below in the &lt;a href=&quot;#custom-caching&quot;&gt;Custom Caching&lt;/a&gt; and &lt;a href=&quot;#managed-caching&quot;&gt;Managed Caching&lt;/a&gt; sections. However, since DynamoDB offers response times in the low double-digit millisecond range within the AWS network (in my experience often even below 10 ms), it&apos;s a great alternative to use DynamoDB as a caching service. This means, instead of using a local variable outside of your handler function, you just make a quick call to your DynamoDB cache table and retrieve the cached data from there. So the process looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-lambda-caching-external-service.png&quot; alt=&quot;AWS Lambda cache data in an external service.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Cache data in an external service like DynamoDB. Works similarly when using a custom cache solution or AWS ElastiCache, see sections below.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Check if your DynamoDB cache table contains an entry for a specific key.&lt;/li&gt;
&lt;li&gt;If yes, continue with the cached value.&lt;/li&gt;
&lt;li&gt;If no, make a call to your external service and store your data in your cache table.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This has the &lt;strong&gt;big advantage&lt;/strong&gt; that all of your Lambda functions can benefit from the cache. It&apos;s like simple caching on steroids because the cache is kind of synchronized between all Lambda functions. You can further improve this setup by using &lt;a href=&quot;https://aws.amazon.com/dynamodb/dax/&quot;&gt;DynamoDB Accelerator (DAX)&lt;/a&gt; to reduce the response times even more. (Consider that DAX might introduce new challenges 😉 )&lt;/p&gt;
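&lt;p&gt;A read-through cache on top of DynamoDB can look like the following sketch. The table name and the attribute names are assumptions for this example, and the client is injected so you can pass in an &lt;code&gt;AWS.DynamoDB.DocumentClient&lt;/code&gt; in production:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Check the cache table first; on a miss, call the expensive
// service and store the result. Names are example assumptions.
async function getCachedFromDynamo(documentClient, key, loadFn) {
    const getParams = { TableName: &apos;lambda-cache&apos;, Key: { cacheKey: key } };
    const result = await documentClient.get(getParams).promise();
    if (result.Item) {
        return result.Item.cacheValue; // cache hit
    }
    const value = await loadFn(); // expensive external call
    await documentClient.put({
        TableName: &apos;lambda-cache&apos;,
        Item: { cacheKey: key, cacheValue: value }
    }).promise();
    return value;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In practice you&apos;d also store a TTL attribute and enable DynamoDB&apos;s Time to Live feature so that stale entries expire automatically.&lt;/p&gt;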
&lt;h3&gt;Custom Caching&lt;/h3&gt;
&lt;p&gt;In a &lt;a href=&quot;https://github.com/seeebiii/caching-in-aws-lambda/tree/master/03-lambda-with-custom-caching&quot;&gt;custom caching approach&lt;/a&gt;, you make use of an existing caching library/system (e.g. Hazelcast, Redis or Memcached) and host it on your own machines, e.g. on EC2 instances. This can be useful if you already have EC2 instances in your stack and want to add some more to provide a cache cluster. After setting up your cache, you simply add some code to your Lambda functions and connect to your own cache. With this approach, you keep the same access workflow as before: check if the cache contains your key =&amp;gt; if not, make your expensive request once and then add the key-value pair to the cache. This usually works fine as long as your Lambda functions only connect as clients and not as full cluster nodes. A full cluster node, like in Hazelcast, would first synchronize a big chunk of data because it&apos;s part of the cluster. For a Lambda function, this isn&apos;t acceptable.&lt;/p&gt;
&lt;h4&gt;Moving to a VPC&lt;/h4&gt;
&lt;p&gt;Although custom caching might sound like a good solution to you, you need to be aware of a catch here: by default, &lt;a href=&quot;https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html&quot;&gt;EC2 instances are placed into a default (public) VPC&lt;/a&gt; in your AWS account. This means they are reachable from the internet, and with a too permissive security group potentially everyone could access your cache. You probably want to avoid that. Hence you should use a custom VPC (with a private subnet) to put your EC2 cache instances into.&lt;/p&gt;
&lt;p&gt;However, a second problem arises with this move. By default, a Lambda function is also placed into a default (public) VPC. The problem is, by default &lt;a href=&quot;https://docs.aws.amazon.com/lambda/latest/dg/vpc.html&quot;&gt;you cannot access any resource inside a custom VPC&lt;/a&gt; from within your Lambda function. Hence, you also have to move your Lambda functions into the same VPC as your EC2 instance where your cache is running on. Unfortunately, this might increase the startup time of your Lambda functions. The reason is that AWS has to dynamically create an Elastic Network Interface (ENI) and attach it to your Lambda function instance. With an ENI your Lambda function can access the resources. (Update: &lt;a href=&quot;https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/&quot;&gt;AWS has improved this situation a lot&lt;/a&gt;, so the disadvantage isn&apos;t that big anymore)&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/seeebiii/caching-in-aws-lambda/blob/master/03-lambda-with-custom-caching/cfn.yml&quot;&gt;CloudFormation template in my repository&lt;/a&gt; shows you how to set up such an infrastructure. I&apos;m using a VPC with a public subnet, a subnet for my Lambda functions and a subnet for the EC2 cache instances. The Lambda functions have access to the cache instances in the cache subnet. They can also communicate with the internet by leveraging a NAT Gateway which sits in the public subnet.&lt;/p&gt;
&lt;h3&gt;Managed Caching&lt;/h3&gt;
&lt;p&gt;In a &lt;a href=&quot;https://github.com/seeebiii/caching-in-aws-lambda/tree/master/04-lambda-with-managed-caching&quot;&gt;managed caching approach&lt;/a&gt;, the setup is pretty similar to the custom caching approach. The big difference is that you use a managed caching service instead of provisioning your own caching instances. For example, you can use &lt;a href=&quot;https://aws.amazon.com/elasticache/&quot;&gt;AWS ElastiCache&lt;/a&gt; which is based on Redis or Memcached. Both are widely supported caching systems with SDKs for a lot of languages. As said, the setup is similar: you&apos;ll have to place the ElastiCache cluster into a VPC and move your Lambda functions into the VPC as well. In contrast to the previous approach, a managed service brings certain advantages: a quick setup, no maintenance work on your side and good compatibility with existing Redis and Memcached clients. Especially the maintenance aspect is worth considering! The disadvantage is the same as for custom caching: your Lambda functions take longer to start, because the ENI needs to be created dynamically.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;After discussing all options for caching in AWS Lambda, it&apos;s time for a quick conclusion. In my opinion you should prefer the simple caching solution wherever possible. One use case might be that you retrieve data from an API which does not change often, e.g. some settings. This is easy to cache but saves you a lot of time. If you want to optimize this, use a DynamoDB table instead - this works pretty well for most use cases. If you really need an advanced caching mechanism, then you should use a custom or managed caching solution. The startup time of your function will take longer, but AWS has made some updates to improve that. For more &lt;a href=&quot;/2020/03/31/going-serverless-why-and-how-2/&quot;&gt;best practices for your serverless architecture and development&lt;/a&gt;, check out my comprehensive guide on serverless patterns. I also recommend watching a bit of the video &lt;a href=&quot;https://www.youtube.com/watch?v=QdzV04T_kec&amp;amp;t=2400&quot;&gt;A Serverless Journey: AWS Lambda under the hood&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Remove old CloudWatch log groups of Lambda functions</title><link>https://www.sebastianhesse.de/2018/10/07/remove-old-cloudwatch-log-groups-of-lambda-function/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2018/10/07/remove-old-cloudwatch-log-groups-of-lambda-function/</guid><description>Automatically remove orphaned CloudWatch log groups from deleted Lambda functions. Python script using CloudFormation stack to identify and clean up old logs.</description><pubDate>Sun, 07 Oct 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Do you recognize this view when looking into your CloudWatch log groups? Each AWS Lambda function has an associated CloudWatch log group. However, there is no cleanup process available as soon as a relationship between a CloudWatch log group and Lambda function expires. In that case it&apos;s necessary to remove these old log groups manually. In this post I&apos;ll show you an easy way to always have a clean set of CloudWatch log groups by automatically removing old log groups.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Before you start:&lt;/strong&gt; use the following script with care! I do &lt;strong&gt;not&lt;/strong&gt; recommend to use this in any AWS production account. Instead, you could for example use it in your own developer account.&lt;/p&gt;
&lt;p&gt;Removing old CloudWatch log groups which do not belong to a Lambda function anymore is a tedious process. I guess you don&apos;t like to remove them manually, right? Same for me. Therefore I&apos;ve published a small &lt;a href=&quot;https://gist.github.com/seeebiii/c91815200915a17131c9d908c525c357&quot;&gt;script on GitHub&lt;/a&gt; which removes such old CloudWatch log groups. There are multiple reasons why old and unused CloudWatch log groups exist. One reason is that a Lambda function got a new name or simply does not exist anymore. Like in the screenshot above, the name simply changed from &quot;MyFunction&quot; to &quot;NewFunction&quot;.&lt;/p&gt;
&lt;p&gt;But there are ways to automate this process. My script works basically like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Get all Lambda function names from your target CloudFormation stack, e.g. &lt;code&gt;my-stack-MyFunction-1A2B3C&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Get all CloudWatch log group names, e.g. &lt;code&gt;/aws/lambda/my-stack-MyFunction-1A2B3C&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Loop over all CloudWatch log group names
&lt;ol&gt;
&lt;li&gt;Retrieve the Lambda function name by removing &lt;code&gt;/aws/lambda/&lt;/code&gt; from the log group name&lt;/li&gt;
&lt;li&gt;Remove all log groups which do belong to the CloudFormation stack, but do not match with any function&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
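&lt;p&gt;The matching logic of step 3 boils down to a simple comparison. Here is a Node.js sketch of the idea (not the actual script); the stack prefix is an example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Find log groups that belong to the stack but have no matching function.
function findOrphanedLogGroups(functionNames, logGroupNames, stackPrefix) {
    const orphans = [];
    for (const logGroupName of logGroupNames) {
        const functionName = logGroupName.replace(&apos;/aws/lambda/&apos;, &apos;&apos;);
        if (functionName.indexOf(stackPrefix) === 0) {        // belongs to the stack...
            if (functionNames.indexOf(functionName) === -1) { // ...but has no function
                orphans.push(logGroupName);
            }
        }
    }
    return orphans;
}
&lt;/code&gt;&lt;/pre&gt;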
&lt;p&gt;As you can see, this script only works if you use CloudFormation to manage your Lambda functions. However, if you adjust the code to your needs, it should also work in other situations. You only have to make sure that you don&apos;t accidentally remove too many log groups; it&apos;s up to you how you verify that. Eeeasy!&lt;/p&gt;
&lt;p&gt;Another recommendation from my side: Put this cleanup Lambda function into a separate maintenance stack and let the function execute each night. For example, use a &lt;a href=&quot;https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html#RateExpressions&quot;&gt;Rate Expression&lt;/a&gt; to schedule your Lambda function with &apos;&lt;em&gt;rate(1 day)&lt;/em&gt;&apos; for a daily execution. Then you&apos;ll have a clean set of CloudWatch log groups every day. Furthermore, such a maintenance stack is a nice thing, because it also gives you further options &lt;a href=&quot;/2018/04/22/shut-down-cloudformation-stack-resources-over-night-using-aws-lambda/&quot;&gt;like shutting down certain AWS resources over night&lt;/a&gt; which will save you money as well.&lt;/p&gt;
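&lt;p&gt;The scheduled trigger can be declared in CloudFormation roughly like this (resource and function names are examples; you additionally need an &lt;code&gt;AWS::Lambda::Permission&lt;/code&gt; that allows &lt;code&gt;events.amazonaws.com&lt;/code&gt; to invoke the function):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CleanupSchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 day)
    Targets:
      - Arn: !GetAtt CleanupFunction.Arn
        Id: CleanupFunctionTarget
&lt;/code&gt;&lt;/pre&gt;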
&lt;/content:encoded&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;My first time on a bigger stage&lt;/title&gt;&lt;link&gt;https://www.sebastianhesse.de/2018/09/23/my-first-time-on-a-bigger-stage/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://www.sebastianhesse.de/2018/09/23/my-first-time-on-a-bigger-stage/&lt;/guid&gt;&lt;description&gt;Essential lessons from giving a conference talk at Atlas Camp 2018: why storytelling matters, how much preparation time you really need, and presentation techniques that work.&lt;/description&gt;&lt;pubDate&gt;Sun, 23 Sep 2018 00:00:00 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&amp;lt;p&amp;gt;On the 7th of September, I gave a talk at &amp;lt;a href=&amp;quot;https://www.atlassian.com/atlascamp&amp;quot;&amp;gt;Atlas Camp 2018 in Barcelona&amp;lt;/a&amp;gt;. You can watch it &amp;lt;a href=&amp;quot;https://www.youtube.com/watch?v=XfpskCjfBBw&amp;quot;&amp;gt;here on YouTube&amp;lt;/a&amp;gt;. It was my first time on a bigger stage and about 80-100 people listened to my words - amazing! With this blog post I want to say thanks to all who helped me achieve this and share some lessons learned with you.&amp;lt;/p&amp;gt;
&lt;p&gt;Atlas Camp is a conference organized by Atlassian, the company behind Jira, Confluence and many other great software products. Here, vendors and partners receive and share knowledge with each other. The main topic is software development, but marketing and business topics are covered as well. The speakers are not only from Atlassian itself, but also from outside. So this year, I took the opportunity to present the experiences we&apos;ve made at &lt;a href=&quot;https://www.k15t.com&quot;&gt;K15t Software&lt;/a&gt; where I work as a Software Engineer. Last year, we migrated our Jira app &lt;a href=&quot;https://marketplace.atlassian.com/apps/1215199/backbone-issue-sync-for-jira&quot;&gt;Backbone Issue Sync&lt;/a&gt; to the cloud using AWS Lambda. AWS Lambda provides an innovative approach for running code in the cloud: instead of spinning up a server or Docker container, it dynamically instantiates small functions. The migration was not easy for us, also because we only had little experience with AWS Lambda before. Luckily, we figured out a few things before launching, but there are things you will only face when your app is already running. Hence, &lt;a href=&quot;https://www.youtube.com/watch?v=XfpskCjfBBw&quot;&gt;watch the video on YouTube&lt;/a&gt; to learn about our experiences with AWS Lambda. And if you have made your own experiences with it, don&apos;t hesitate to share them with me!&lt;/p&gt;
&lt;h2&gt;Lesson learned: Having a good story is key&lt;/h2&gt;
&lt;p&gt;Now I&apos;m coming back to the actual topic of this blog post. Have you ever given a talk in front of many people you don&apos;t even know? Even if not, you might know that a lot of preparation is involved in giving such a talk. And for me, one lesson is that I underestimated this fact a bit. Initially I thought: &quot;&lt;em&gt;I know what I want to tell. I prepare the slides, practice my speech a few times and give the damn talk.&lt;/em&gt;&quot; That&apos;s naive and does not reflect reality. I had a story in my mind when submitting my talk proposal, but it turned out this story was hard to fit into a nice and shiny talk. It was confusing in the end, due to jumping back and forth between topics. The style was too much that of a software developer (using pointers &amp;amp; references) instead of a storyteller. Fortunately, I talked to a lot of people and asked them for feedback. With this feedback I adjusted my story to follow a straight line without jumps in between. So, here is another good piece of advice for you: If you want to improve your talk, keep sharing your story and let people criticize it. You can only learn from that ;-)&lt;/p&gt;
&lt;h2&gt;Lesson learned: Preparation takes time&lt;/h2&gt;
&lt;p&gt;Not only the story is important, but also how you present it. In order to present your talk well, you need to prepare it. You need to repeat your words over and over again. In my case, I started to prepare my talk (meaning: standing in my living room and talking out loud to a wall) about 5 weeks beforehand. I haven&apos;t counted the times, but it must have been more than 20 times that I stood up and repeated it. That&apos;s more than 13 hours of non-stop talking! Not including the time afterwards to change slides or think about the wrong words I had used. The lesson learned is: preparation takes time! However, the positive effect is that you get into some kind of auto mode when presenting it later. If you&apos;ve practiced your talk a few times, you don&apos;t need to think about the words on stage - you&apos;ll remember them more easily with decent preparation. But even if you do this a few times, don&apos;t forget to let others see your performance. Let them give you feedback about it. This is invaluable!&lt;/p&gt;
&lt;h2&gt;Lesson learned: How you present matters&lt;/h2&gt;
&lt;p&gt;Another lesson for me was about the way I presented myself and the slides on stage. Especially recording a video beforehand helped me figure out how to improve my gestures. One week before the actual talk at Atlas Camp, I gave the talk in front of my coworkers. It was recorded, and you might wonder why... Well, if you see yourself on camera, you&apos;ll notice the mistakes you made more easily. For example, I always thought my hand gestures or certain facial expressions were already enough to support my message. Simply because it felt weird to move my hands a lot. But when I watched the recorded video, I realized the gestures were relatively small. This matters, because in the worst case (i.e. the audience does not notice the movement), your message isn&apos;t supported at all. If you are curious how you can improve your gestures, I can recommend &lt;a href=&quot;https://www.youtube.com/watch?v=-3ywrgCA-1I&quot;&gt;this video by Toastmasters&lt;/a&gt; (but there are tons of others). So take it seriously how you present yourself, because this definitely makes a difference. And one related note: even if you think your audience might have noticed something you did wrong, simply continue with your talk. Your point of view is always different, and you see your own performance more negatively than it actually is.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Now, let me quickly summarize my first talk experience on a bigger stage: it was fun to do and I will definitely do it again in the future :-) Even though the preparation takes time, it&apos;s worth the effort! Thanks to all who have helped me on the way!&lt;/p&gt;
</content:encoded></item><item><title>Shut down CloudFormation stack resources over night using AWS Lambda</title><link>https://www.sebastianhesse.de/2018/04/22/shut-down-cloudformation-stack-resources-over-night-using-aws-lambda/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2018/04/22/shut-down-cloudformation-stack-resources-over-night-using-aws-lambda/</guid><description>Save AWS costs by automatically shutting down development stack resources overnight using Lambda functions and CloudFormation parameters. Reduce expenses by 50% or more.</description><pubDate>Sun, 22 Apr 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A CloudFormation stack evolves over time and usually costs increase as well. You&apos;ll probably not only have one stack, but instead have at least a production and a development stack. Even one development stack per developer might be common in your organization. This means the total costs increase even more. In order to prevent paying for idle resources, you can shutdown CloudFormation stack resources over night to save costs. A nice option is AWS Lambda here. You can schedule a Lambda function to stop or start resources after or before your working day. This blog post describes the steps to accomplish such a setup to decrease costs for developer stacks.&lt;/p&gt;
&lt;h2&gt;Identify the expensive resources&lt;/h2&gt;
&lt;p&gt;Before starting to decrease costs, you should be aware of where your costs come from. I already wrote another article about &lt;a href=&quot;/2018/01/15/keeping-aws-budget-control/&quot;&gt;keeping your AWS budget under control&lt;/a&gt; and this blog post extends my initial approach. In short, you need to identify the resources and services which are consuming the biggest part of your bill. A first step is to head over to your last bills in your &lt;strong&gt;Billing Management Console&lt;/strong&gt; and investigate the most expensive services. In a developer stack you&apos;ll probably see services like EC2, Fargate or others like provisioned DynamoDB tables, Kinesis, and more. These services are always on, i.e. they just run and cost you money if you don&apos;t stop them. So, we&apos;ll have to find a way to shut down these resources or at least reduce their costs.&lt;/p&gt;
&lt;h2&gt;Consider different pricing models&lt;/h2&gt;
&lt;p&gt;As a next step you have to consider that each service uses a different pricing model. Hence, you have to use a different approach for each resource. For example, EC2 pricing is based on an hourly rate (or per second, depending on the instance type). In order to save costs, you&apos;d have to shut down the instance completely. As a counterexample, DynamoDB&apos;s pricing model is based on provisioned read and write capacity and how much data you&apos;ve stored. Let&apos;s ignore the amount of stored data for now, as developer stacks usually don&apos;t contain a lot of data. Then, the costs are mainly driven by the provisioned read and write capacity. Here you can choose between scaling down the provisioned capacities or removing the DynamoDB resources completely (see advantages/disadvantages below). Similarly for Kinesis, you either decrease the shards of your Kinesis stream or you remove the Kinesis resources from your stack.&lt;/p&gt;
&lt;h2&gt;Basic setup&lt;/h2&gt;
&lt;p&gt;Now that you know the expensive resources and how to approach your cost savings, you need to actually reduce the costs. You could do that manually by shutting down resources after each working day and restarting them before the next one begins. But this is a tedious approach and I suggest automating it. AWS Lambda is a perfect solution here: you can create a Lambda function which shuts down and starts up resources each day. Then, a &lt;a href=&quot;https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html&quot;&gt;scheduled event&lt;/a&gt; based on a cron expression can trigger the function after you&apos;ve finished work and before you start your working day the next morning. A good cron expression takes your working days into account, for example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# shutdown schedule: 6pm each weekday
cron(0 18 ? * MON-FRI *)

# startup schedule: 6am each weekday
cron(0 6 ? * MON-FRI *)
&lt;/code&gt;&lt;/pre&gt;
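&lt;p&gt;To wire such a schedule to your Lambda function, you can create a CloudWatch Events rule and attach the function as a target. The following CLI sketch uses hypothetical names and ARNs - adjust them to your setup:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# create a rule for the shutdown schedule
aws events put-rule --name stack-shutdown \
  --schedule-expression &quot;cron(0 18 ? * MON-FRI *)&quot;

# allow CloudWatch Events to invoke the function
aws lambda add-permission --function-name stack-shutdown-fn \
  --statement-id stack-shutdown-rule --action lambda:InvokeFunction \
  --principal events.amazonaws.com

# point the rule at the Lambda function
aws events put-targets --rule stack-shutdown \
  --targets Id=1,Arn=arn:aws:lambda:eu-west-1:123456789012:function:stack-shutdown-fn
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The startup schedule works the same way with a second rule using the 6am cron expression.&lt;/p&gt;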
&lt;h2&gt;Select a shutdown approach&lt;/h2&gt;
&lt;p&gt;The basic setup sounds easy. You have a CloudFormation stack to manage your resources and a Lambda function to start/stop them. But how do you actually shut down resources considering the different pricing models? And how do you make sure that they&apos;re started again the next day? In principle you can choose between one of the following approaches:&lt;/p&gt;
&lt;h4&gt;Delete complete stack&lt;/h4&gt;
&lt;p&gt;The most basic approach is to delete the whole CloudFormation stack and build it up again each day. This is fairly easy but reveals a few drawbacks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It can take &lt;strong&gt;a long time to shutdown&lt;/strong&gt; your resources. For example, if you use CloudFront, certain initialization steps can take about 30 or more minutes. (This is not true anymore because &lt;a href=&quot;https://aws.amazon.com/blogs/networking-and-content-delivery/slashing-cloudfront-change-propagation-times-in-2020-recent-changes-and-looking-forward/&quot;&gt;CloudFront drastically improved the time to create/update a distribution&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;You &lt;strong&gt;can lose data&lt;/strong&gt;. As an example, let&apos;s say you&apos;re using DynamoDB tables containing data for your test environment. If you delete the stack in the evening, the tables and their data are deleted with it. That means you have to restore the data somehow, which usually takes time. Of course, there are backup or recovery options (including automatic database backups) to do that. But it requires more work or time, and backups aren&apos;t free either.&lt;/li&gt;
&lt;li&gt;You &lt;strong&gt;can lose S3 bucket names&lt;/strong&gt;, because &lt;a href=&quot;https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html&quot;&gt;AWS does not guarantee that you can reuse them&lt;/a&gt;. You can minimize the chances to run into this by prefixing your buckets with e.g. a unique name. But you&apos;re not 100% safe!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Of course, a good reason to use this approach is that it&apos;s easy. However, due to the drawbacks, let&apos;s look at further ways to shut down the resources.&lt;/p&gt;
&lt;h4&gt;Use parameters to control resources scalings&lt;/h4&gt;
&lt;p&gt;Another approach is to use parameters within your CloudFormation template. (It&apos;s similar to using parameters to create &lt;a href=&quot;/2018/02/03/creating-different-aws-cloudformation-environments/&quot;&gt;multiple stack environments&lt;/a&gt;.) For example, you can provide parameters to set the provisioned read or write capacity for your &lt;strong&gt;DynamoDB tables&lt;/strong&gt;. In this case you&apos;d just perform a stack update on your CloudFormation stack with new parameter values. Then CloudFormation will take care of scaling down your resources. This also works for resources like &lt;strong&gt;AutoScalingGroups&lt;/strong&gt; or &lt;strong&gt;Kinesis&lt;/strong&gt;. Here is some example code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AWSTemplateFormatVersion: &apos;2010-09-09&apos;
Description: &apos;Template with parameters for DynamoDB provisioned table&apos;

Parameters:
  ProvisionedWriteCapacity:
    Description: &apos;Provisioned write capacity&apos;
    Type: Number
  ProvisionedReadCapacity:
    Description: &apos;Provisioned read capacity&apos;
    Type: Number

Resources: 
  MyDynamoDBTable: 
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - AttributeName: &quot;Id&quot;
          AttributeType: &quot;S&quot;
      KeySchema: 
        - AttributeName: &quot;Id&quot;
          KeyType: &quot;HASH&quot;
      ProvisionedThroughput: 
        ReadCapacityUnits: !Ref ProvisionedReadCapacity
        WriteCapacityUnits: !Ref ProvisionedWriteCapacity
      TableName: &quot;MyTableName&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can use a Lambda function and update the stack by providing new values for your CloudFormation stack parameters. Calling an update on a stack isn&apos;t a big deal but there is a surprise here: since CloudFormation will start/stop resources on your behalf, you need to make sure to set the appropriate permissions in your Lambda function&apos;s role policy as well. I won&apos;t cover this in detail now but you can use &lt;a href=&quot;https://aws.amazon.com/cloudtrail/&quot;&gt;CloudTrail&lt;/a&gt; to identify the necessary permissions.&lt;/p&gt;
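&lt;p&gt;For reference, the stack update the Lambda function performs is equivalent to the following CLI call (stack name and capacity values are just a sketch):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# scale the provisioned capacities down for the night
aws cloudformation update-stack --stack-name my-dev-stack \
  --use-previous-template \
  --parameters ParameterKey=ProvisionedReadCapacity,ParameterValue=1 \
               ParameterKey=ProvisionedWriteCapacity,ParameterValue=1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next morning, a second scheduled invocation sets the values back to the capacities you need during the day.&lt;/p&gt;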
&lt;p&gt;The drawback of this approach is that it can&apos;t be used for all resources. For example, if you&apos;re using single EC2 instances in your CloudFormation stack (which sounds like a bad idea in general, but there might be good use cases), you can&apos;t use a parameter like above. In this case you need to use conditions, which are discussed below.&lt;/p&gt;
&lt;h4&gt;Use conditions to remove/add resources&lt;/h4&gt;
&lt;p&gt;You can extend the parameter approach by introducing &lt;a href=&quot;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html&quot;&gt;conditions to your CloudFormation template&lt;/a&gt;. Conditions are evaluated based on the template parameters and can be placed on stack resources. Only if a condition evaluates to &quot;true&quot; is the resource created. For example, you could add a parameter like &quot;OverNightShutdown&quot; to your template and a condition that evaluates whether this parameter is &quot;false&quot;. If it is, the resource is created. Otherwise CloudFormation won&apos;t create it, or, in case of a shutdown, will even remove it. Here&apos;s an example of using conditions:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AWSTemplateFormatVersion: &apos;2010-09-09&apos;
Description: &apos;Template with conditions to shutdown resources&apos;

Parameters:
  OverNightShutdown:
    Description: &apos;Indicates if certain resources should be shutdown overnight&apos;
    Type: String

Conditions:
  CreateOverNightResources: !Equals [!Ref OverNightShutdown, &apos;false&apos;]

Resources:
  MyDynamoDBTable:
    Type: AWS::DynamoDB::Table
    Condition: CreateOverNightResources
    Properties:
      # ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Unfortunately, this approach also has a few drawbacks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If you remove a resource, you have to consider the dependencies within your template (or maybe even outside of it). If you&apos;d like to shut down resources, you also have to shut down all resources which reference them. This can get tricky, and you might even consider removing the whole stack if you&apos;d have to remove 90% of the resources due to their dependencies (or not use this condition approach at all).&lt;/li&gt;
&lt;li&gt;If you&apos;re using the Serverless Application Model (SAM), this approach used to not work properly, because resources of type &lt;em&gt;AWS::Serverless::Function&lt;/em&gt; didn&apos;t support conditions. This was especially bad in combination with the previous drawback regarding resource dependencies. Update: it&apos;s now supported, see &lt;a href=&quot;https://github.com/awslabs/serverless-application-model/issues/142&quot;&gt;issue #142 in the SAM GitHub project&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;As you can see, there is no single solution. You might even consider a mix of the approaches, e.g. using parameters and conditions to shut down instances or reduce provisioned capacities. That&apos;s what we actually did in a recent project. Whatever you choose, please follow this general rule: never manually remove resources that are managed by your stack. You will very likely run into problems!&lt;/p&gt;
</content:encoded></item><item><title>Creating Different Environments With AWS CloudFormation</title><link>https://www.sebastianhesse.de/2018/02/03/creating-different-aws-cloudformation-environments/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2018/02/03/creating-different-aws-cloudformation-environments/</guid><description>Create separate dev, staging, and production environments with AWS CloudFormation using parameters and deploy scripts. Learn the naming strategy to keep resources organized.</description><pubDate>Sat, 03 Feb 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently, a question on &lt;a href=&quot;https://stackoverflow.com/questions/48386903/aws-deploying-environment-and-create-environments-for-dev-and-prod&quot;&gt;stackoverflow.com&lt;/a&gt; popped up which asked for different environments with AWS CloudFormation. Here, I want to present my answer and give some more information about this topic. The code for this blog post can be found in my &lt;a href=&quot;https://github.com/seeebiii/aws-cloudformation-templates/tree/master/08-CloudFormation-Environments&quot;&gt;GitHub repository&lt;/a&gt; where I also have some more &lt;a href=&quot;https://github.com/seeebiii/aws-cloudformation-templates&quot;&gt;CloudFormation examples&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Different Environments&lt;/h2&gt;
&lt;p&gt;Before we start, let&apos;s define what &quot;different environments&quot; mean. When developing software, you typically have multiple stages for your software product:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;a development stage:&lt;/strong&gt; reflects your current state of development and might be broken at some points&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;a pre-production stage&lt;/strong&gt;: very similar to production stage (in the best case identical) to test things on a production-like system before going live&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;a production stage:&lt;/strong&gt; contains the version of your code which is &quot;live&quot; and actually used by customers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A few years ago, when technologies like AWS CloudFormation or even Docker weren&apos;t available, developers created such environments manually. Sometimes they used scripts to automate certain steps. However, they often faced the problem that the stages were not similar enough. Hence, errors and bugs were sometimes only detected after a deployment to production - which is often too late. Services like CloudFormation can reduce this problem if used correctly.&lt;/p&gt;
&lt;h2&gt;Advantages Of Using CloudFormation&lt;/h2&gt;
&lt;p&gt;To avoid problems like diverging stages, you can use template files and a service like CloudFormation. Template files contain the definition of your stack. CloudFormation reads these files and creates the resources based on your definition. Automatically. With the same output every time*. That&apos;s the biggest advantage. &lt;a href=&quot;http://blog.linkeit.com/en/what-are-the-main-benefits-of-aws-cloudformation&quot;&gt;But there are more&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Creating Environments with CloudFormation&lt;/h2&gt;
&lt;p&gt;Let&apos;s see how you can use CloudFormation to create different environments. Basically, you need to &lt;strong&gt;parameterize your stack name and stack resources&lt;/strong&gt;. To achieve this, I follow the naming structure &lt;em&gt;[project]-[env]-[resource]&lt;/em&gt;, e.g. &lt;em&gt;hello-world-dev-my-bucket&lt;/em&gt;. The following code shows an example template where the bucket name is parameterized:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AWSTemplateFormatVersion: &apos;2010-09-09&apos;
Transform: AWS::Serverless-2016-10-31
Description: Deploys a simple AWS Lambda using different environments.

Parameters:
  Env:
    Type: String
    Description: The environment you&apos;re deploying to.

Resources:
  ServerlessFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./
      Policies:
        - AWSLambdaBasicExecutionRole

  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub &apos;my-bucket-name-${Env}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should do that with all of your resources. This helps you to identify them, e.g. when using the AWS Console. As you can see here, I didn&apos;t do it for the Lambda function as AWS is doing it automatically for me. But of course, you can add your naming strategy here as well.&lt;/p&gt;
&lt;h2&gt;Using a Simple Deploy Script&lt;/h2&gt;
&lt;p&gt;In the second step, we create a small deploy script which sets a parameterized stack name. This is important: without a parameterized name we would just update the same stack again and again instead of creating one stack per environment.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/usr/bin/env bash

LAMBDA_BUCKET=&quot;Your-S3-Bucket-Name&quot;
# change this ENV variable depending on the environment you want to deploy
ENV=&quot;prd&quot;
STACK_NAME=&quot;aws-lambda-cf-environments-${ENV}&quot;

# now package the CloudFormation template, upload SAM artifacts to S3 and deploy it
aws cloudformation package --template-file cfn.yml --s3-bucket ${LAMBDA_BUCKET} --output-template-file cfn.packaged.yml
aws cloudformation deploy --template-file cfn.packaged.yml --stack-name ${STACK_NAME} --capabilities CAPABILITY_IAM --parameter-overrides Env=${ENV}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can now try to deploy the script or enhance it, e.g. by reading the environment parameter from a script parameter. Whatever you do, make sure that you keep it easy and don&apos;t exceed the &lt;a href=&quot;/2017/08/12/reduce-cloudformation-template-size/&quot;&gt;maximum size for CloudFormation templates&lt;/a&gt;.&lt;/p&gt;
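&lt;p&gt;For example, reading the environment from the first script argument (falling back to a default) is a small change:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# use the first script argument as environment, default to &quot;dev&quot;
ENV=&quot;${1:-dev}&quot;
STACK_NAME=&quot;aws-lambda-cf-environments-${ENV}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Assuming the script is named &lt;code&gt;deploy.sh&lt;/code&gt;, calling &lt;code&gt;./deploy.sh prd&lt;/code&gt; then deploys the production stack.&lt;/p&gt;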
&lt;p&gt;* Well, &quot;every time&quot; is not true. Things go wrong and so do software programs.&lt;/p&gt;
</content:encoded></item><item><title>Keeping your AWS budget under control</title><link>https://www.sebastianhesse.de/2018/01/15/keeping-aws-budget-control/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2018/01/15/keeping-aws-budget-control/</guid><description>Control AWS spending with AWS Budgets. Set up cost alerts and forecasted thresholds to prevent surprise charges. Simple 5-minute setup saves you from overspending.</description><pubDate>Mon, 15 Jan 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you use AWS for your private website, tools or just for testing new things, you will get to the point where you&apos;re not sure how much money your resources cost you. AWS has a small tool for that: AWS Budgets. With this tool you can keep your AWS budget under control and get notified if it exceeds your limit. Combined with proper &lt;a href=&quot;/2018/02/03/creating-different-aws-cloudformation-environments/&quot;&gt;environment management for your CloudFormation stacks&lt;/a&gt;, you can effectively control costs across multiple environments.&lt;/p&gt;
&lt;h2&gt;Features&lt;/h2&gt;
&lt;p&gt;The following main features are supported by AWS Budgets:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Filter costs on tags, services, etc.&lt;/li&gt;
&lt;li&gt;Observe real costs or forecasted costs&lt;/li&gt;
&lt;li&gt;Get notifications if your resources exceed your limit&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How to setup AWS Budgets&lt;/h2&gt;
&lt;p&gt;Go to your &lt;strong&gt;Billing Dashboard&lt;/strong&gt;, click on &lt;strong&gt;Budgets&lt;/strong&gt; in the left menu and then create a new budget. You will see a wizard that guides you through the steps.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-budget-type.png&quot; alt=&quot;AWS Budget setup screen showing cost budget type selection&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Step 1: Setup a cost budget&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-budget-name-period.png&quot; alt=&quot;AWS Budget configuration screen for naming budget and selecting time period&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Step 2.1: Give it a name and select period&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-budget-amount.png&quot; alt=&quot;AWS Budget configuration screen for specifying budget amount&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Step 2.2: Specify your budget amount&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/aws-budget-thresholds.png&quot; alt=&quot;AWS Budget threshold and notification settings configuration screen&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Step 3: Define the threshold and notification settings.&lt;/p&gt;
&lt;p&gt;Notifications are really helpful if you&apos;re not constantly visiting your billing dashboard. &lt;strong&gt;Hint:&lt;/strong&gt; use a forecasted value for your threshold, so you&apos;ll get a message (hopefully) early enough before it&apos;s too late. For more proactive cost reduction, consider &lt;a href=&quot;/2018/04/22/shut-down-cloudformation-stack-resources-over-night-using-aws-lambda/&quot;&gt;automatically shutting down expensive CloudFormation stack resources&lt;/a&gt; overnight when they&apos;re not needed.&lt;/p&gt;
&lt;p&gt;That&apos;s it already! This small feature can save you real money the next time you forget to shut down your instances. For even more cost savings, &lt;a href=&quot;/2018/04/22/shut-down-cloudformation-stack-resources-over-night-using-aws-lambda/&quot;&gt;automate the shutdown of your development resources&lt;/a&gt; during off-hours.&lt;/p&gt;
</content:encoded></item><item><title>Use Jersey and Spring in AWS Lambda</title><link>https://www.sebastianhesse.de/2017/08/27/use-jersey-spring-aws-lambda/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2017/08/27/use-jersey-spring-aws-lambda/</guid><description>Integrate Jersey and Spring Framework in AWS Lambda using aws-serverless-java-container. Complete guide with code examples for building REST APIs on Lambda with Java.</description><pubDate>Sun, 27 Aug 2017 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;AWS Lambda is actually made to be used by implementing small functions which can be started quickly. So your code artifact should be as small as possible for a fast startup time. However, in the Java world there are nice frameworks like &lt;a href=&quot;https://jersey.github.io&quot;&gt;Jersey&lt;/a&gt; and &lt;a href=&quot;https://spring.io/&quot;&gt;Spring&lt;/a&gt; which can help you writing code for an API a lot! Unfortunately these frameworks can take up to a few MB and blow up your artifact, but you might have your reasons to use them in AWS Lambda, e.g. because you&apos;re migrating an existing project to AWS Lambda. So let&apos;s see, how you can use Jersey and Spring together in AWS Lambda! The code can be found in my &lt;a href=&quot;https://github.com/seeebiii/lambda-jersey-spring-example&quot;&gt;GitHub repository lambda-jersey-spring-example&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A good starting point is the &lt;a href=&quot;https://github.com/awslabs/aws-serverless-java-container&quot;&gt;aws-serverless-java-container project on GitHub&lt;/a&gt;. It provides Maven modules to support Jersey, Spring and Spark framework. Nice, so let&apos;s get started and include the relevant Maven dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.amazonaws&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;aws-lambda-java-core&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.1.0&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.amazonaws&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;aws-lambda-java-log4j&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.0.0&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.amazonaws.serverless&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;aws-serverless-java-container-jersey&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;0.7&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.amazonaws.serverless&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;aws-serverless-java-container-spring&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;0.7&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(As you can see, this also includes the dependencies to write AWS Lambda functions in Java)&lt;/p&gt;
&lt;h2&gt;Add a Request Handler&lt;/h2&gt;
&lt;p&gt;The next step is to add a Lambda function using the &lt;code&gt;RequestHandler&lt;/code&gt; interface of &lt;code&gt;aws-lambda-java-core&lt;/code&gt; from &lt;a href=&quot;https://github.com/aws/aws-lambda-java-libs&quot;&gt;aws-lambda-java-libs on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;package de.sebastianhesse.aws.examples;

import com.amazonaws.serverless.proxy.internal.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.internal.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.jersey.JerseyLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

/**
 * A request handler loading a Spring context and using spring supported Jersey resources.
 **/
public class JerseySpringHandler implements RequestHandler&amp;lt;AwsProxyRequest, AwsProxyResponse&amp;gt; {

    private JerseyLambdaContainerHandler&amp;lt;AwsProxyRequest, AwsProxyResponse&amp;gt; handler;

    public JerseySpringHandler() {
        // create Spring context
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.register(SpringConfig.class);
        context.refresh();

        // use Spring bean of JerseyResourceConfig to have spring supported resources
        JerseyResourceConfig resourceConfig = context.getBean(JerseyResourceConfig.class);
        handler = JerseyLambdaContainerHandler.getAwsProxyHandler(resourceConfig);
    }

    public AwsProxyResponse handleRequest(AwsProxyRequest awsProxyRequest, Context context) {
        return handler.proxy(awsProxyRequest, context);
    }

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is the &lt;strong&gt;most important&lt;/strong&gt; part where Jersey and Spring are glued together. So, what&apos;s done here? There are three simple things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A Spring context is created using an annotation based config.&lt;/li&gt;
&lt;li&gt;A bean of a custom Jersey configuration class &lt;code&gt;JerseyResourceConfig&lt;/code&gt; is retrieved from the Spring context. The class registers the actual Jersey resources (see below) which are also available in the Spring context.&lt;/li&gt;
&lt;li&gt;The Jersey configuration bean is used to handle all incoming requests.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Configure Jersey and Spring&lt;/h2&gt;
&lt;p&gt;Now, let&apos;s take a look at the &lt;code&gt;JerseyResourceConfig&lt;/code&gt; and how the resources are registered:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;package de.sebastianhesse.aws.examples;

import de.sebastianhesse.aws.examples.jersey.TestOneResource;
import de.sebastianhesse.aws.examples.jersey.TestTwoResource;
import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

/**
 * A Jersey config (registered as a Spring bean) which puts the Spring context into the properties.
 */
@Component
public class JerseyResourceConfig extends ResourceConfig {

    @Autowired TestOneResource oneResource;
    @Autowired TestTwoResource twoResource;

    @PostConstruct
    public void init() {
        // register spring supported resources
        register(oneResource);
        register(twoResource);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Quite simple, isn&apos;t it? Ok, so the following snippets show the code for a simple Spring annotation config and one of the sample Jersey resources.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;package de.sebastianhesse.aws.examples;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

/**
 * Annotation based Spring configuration.
 */
@Configuration
@ComponentScan(&quot;de.sebastianhesse.aws.examples&quot;)
public class SpringConfig {

}

package de.sebastianhesse.aws.examples.jersey;

import de.sebastianhesse.aws.examples.DefaultService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

/**
 * A simple Jersey resource using Spring.
 */
@Path(&quot;/one&quot;)
@Service
public class TestOneResource {

    @Autowired DefaultService service;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public Response simpleGet() {
        return Response.ok(&quot;Resource Number One: &quot; + service.getFoo()).build();
    }

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Nothing special here as you can see. It&apos;s just using another &lt;code&gt;DefaultService&lt;/code&gt; to prove that autowiring a bean works as expected.&lt;/p&gt;
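&lt;p&gt;For completeness, here is a minimal sketch of what &lt;code&gt;DefaultService&lt;/code&gt; could look like (the actual class is in the GitHub repository; this version just illustrates the idea):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;package de.sebastianhesse.aws.examples;

import org.springframework.stereotype.Service;

/**
 * A minimal service bean which is autowired into the Jersey resources.
 */
@Service
public class DefaultService {

    public String getFoo() {
        return &quot;foo&quot;;
    }
}
&lt;/code&gt;&lt;/pre&gt;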
&lt;h2&gt;Automate The Steps Using CloudFormation&lt;/h2&gt;
&lt;p&gt;In order to complete the example, the following listing shows you a sample YAML CloudFormation configuration for the Lambda function.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AWSTemplateFormatVersion: &apos;2010-09-09&apos;
Transform: AWS::Serverless-2016-10-31
Description: Example about how to use Jersey and Spring together in AWS Lambda.

Resources:
  JerseySpringHandler:
    Type: AWS::Serverless::Function
    Properties:
      Handler: de.sebastianhesse.aws.examples.JerseySpringHandler
      Runtime: java8
      MemorySize: 320
      Timeout: 60
      CodeUri: target/lambda-jersey-spring-example-1.0.0.jar
      Policies: AWSLambdaBasicExecutionRole
      Events:
        JerseySpringProxy:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: any
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Please consider that in case you&apos;re using another API path for your Lambda function, e.g. &lt;code&gt;/api/{proxy+}&lt;/code&gt;, you must call &lt;code&gt;handler.setBasePath(&quot;/api&quot;)&lt;/code&gt; in &lt;code&gt;JerseySpringHandler&lt;/code&gt; so that the path matching works for Jersey. Another point worth noting is that even this simple example produces a target JAR file of about 12 MB. This is huge compared to what you get when accomplishing the same with Node.js, so use these frameworks with care! In my opinion you don&apos;t need Spring or Jersey if you&apos;re developing a Lambda function. But I understand the desire to use them 😀&lt;/p&gt;
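&lt;p&gt;The base path adjustment is a one-liner in the constructor of &lt;code&gt;JerseySpringHandler&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// only needed if API Gateway maps the function under /api/{proxy+}
handler = JerseyLambdaContainerHandler.getAwsProxyHandler(resourceConfig);
handler.setBasePath(&quot;/api&quot;);
&lt;/code&gt;&lt;/pre&gt;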
&lt;p&gt;That&apos;s it! You&apos;re done and can add more Jersey resources if you have to. If you are interested in more AWS Lambda content, please take a look at &lt;a href=&quot;/2017/08/01/starter-projects-for-aws-lambda-using-nodejs-and-java/&quot;&gt;other examples about AWS Lambda&lt;/a&gt; and &lt;a href=&quot;https://www.sebastianhesse.de/2017/06/24/5-things-consider-writing-lambda-function/&quot;&gt;my top 5 tips on writing a Lambda function&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>How to reduce your CloudFormation template size</title><link>https://www.sebastianhesse.de/2017/08/12/reduce-cloudformation-template-size/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2017/08/12/reduce-cloudformation-template-size/</guid><description>Overcome the 51,200 byte CloudFormation template limit using AWS::Include and cfn-include preprocessor. Split large templates into modular reusable components.</description><pubDate>Sat, 12 Aug 2017 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently, I came across a limit which I haven&apos;t known before: CloudFormation just allows a &lt;strong&gt;maximum size of 51,200 bytes per template&lt;/strong&gt;. When using &lt;a href=&quot;/2019/07/21/going-serverless-why-and-how-1/&quot;&gt;Infrastructure as Code&lt;/a&gt; for your serverless applications, you might encounter this limit as your stack grows. If you have ever reached this limit, you might have encountered this error message:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[YOUR_TEMPLATE_CODE] at &apos;templateBody&apos; failed to satisfy constraint: Member must have length less than or equal to 51200
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So, what are the possible solutions to &lt;strong&gt;reduce a CloudFormation template size&lt;/strong&gt;? In my opinion, there are two relatively easy solutions to overcome this problem: using a preprocessor and/or using AWS::Include.&lt;/p&gt;
&lt;h2&gt;Use a Preprocessor&lt;/h2&gt;
&lt;p&gt;There are several CloudFormation template preprocessors available. One of them is &lt;a href=&quot;https://www.npmjs.com/package/cfn-include&quot;&gt;cfn-include&lt;/a&gt; (a CLI tool), which does a good job in my opinion: it reads your template file (JSON or YAML) and produces a minified JSON template file. This can reduce your template size, e.g. from 50 KB to 40 KB (the savings always depend on your content). Here is a small example of how to use it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# -m =&amp;gt; minify JSON output
cfn-include path/to/cfn.yml -m &amp;gt; output.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a first and easy step to work around the size limit, but it won&apos;t save you forever. Other preprocessor alternatives are &lt;a href=&quot;https://github.com/AOEpeople/StackFormation&quot;&gt;StackFormation&lt;/a&gt; or &lt;a href=&quot;https://github.com/mozilla/awsboxen&quot;&gt;awsboxen&lt;/a&gt;. You might want to check them out as well!&lt;/p&gt;
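&lt;p&gt;To get a feeling for what such a preprocessor does under the hood, here is a minimal sketch in plain Node.js (the function name is mine, not part of cfn-include): re-serializing a pretty-printed JSON template without whitespace already shaves off bytes; real tools add includes, merges and more on top.&lt;/p&gt;

```javascript
// CloudFormation rejects template bodies over 51,200 bytes.
const CFN_TEMPLATE_LIMIT = 51200;

// Re-serializing a JSON template without whitespace is the core idea of a
// minifying preprocessor (tools like cfn-include do much more than this).
function minifyJsonTemplate(templateText) {
  return JSON.stringify(JSON.parse(templateText));
}

// Example: a pretty-printed template shrinks noticeably once minified.
const pretty = JSON.stringify({
  AWSTemplateFormatVersion: '2010-09-09',
  Resources: {
    MyTopic: { Type: 'AWS::SNS::Topic', Properties: { DisplayName: 'MyTopic' } }
  }
}, null, 2);
const minified = minifyJsonTemplate(pretty);

console.log(Buffer.byteLength(pretty) + ' bytes before, ' +
            Buffer.byteLength(minified) + ' bytes after');
console.log('fits the limit: ' +
            (Buffer.byteLength(minified) > CFN_TEMPLATE_LIMIT ? 'no' : 'yes'));
```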
&lt;h2&gt;Use AWS::Include&lt;/h2&gt;
&lt;p&gt;Another option is the &lt;a href=&quot;http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/create-reusable-transform-function-snippets-and-add-to-your-template-with-aws-include-transform.html&quot;&gt;AWS::Include transform&lt;/a&gt;. With a simple snippet like the one below, you can import another template:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;###################################
#### main template&apos;s content ######
###################################

AWSTemplateFormatVersion: &apos;2010-09-09&apos;
Transform: AWS::Serverless-2016-10-31
Description: An AWS::Include test.

Resources:
  MainFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./
      Policies:
        - AWSLambdaBasicExecutionRole

  &apos;Fn::Transform&apos;:
    Name: &apos;AWS::Include&apos;
    Parameters:
      Location: s3://BUCKET_NAME/key/to/included-template.yml

############################
#### included template #####
############################

SNSTopic:
  Type: AWS::SNS::Topic
  Properties:
    DisplayName: MySNSTopic
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s important to note that the included template &lt;strong&gt;must only contain&lt;/strong&gt; the actual parts you want to include. &lt;strong&gt;Nothing&lt;/strong&gt; else like parameter declarations, outputs, etc. Such things belong in the main template! As you might also derive from the code example, all included templates must be uploaded to S3 first. This adds some extra work around a stack deployment, but it can be automated in a deployment script, so it shouldn&apos;t be a big deal. Here is an example of how to achieve this, using a parameter to hand over the S3 URL:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# these commands expect that your main template file is cfn.yml and
# your included template file is cfn-include1.yml;

# upload the included template file to an S3 bucket; the bucket name is held in LAMBDA_BUCKET,
# because I also have a Lambda SAM function in the template above.
# (of course you can use another bucket if you want, just replace LAMBDA_BUCKET)
aws s3 cp cfn-include1.yml s3://${LAMBDA_BUCKET}/templates/cfn-include1.yml

# save the URL of the uploaded template
INCLUDE_URL=&quot;s3://${LAMBDA_BUCKET}/templates/cfn-include1.yml&quot;

# now package the main template, upload SAM artifacts to S3 and deploy it;
aws cloudformation package --template-file cfn.yml --s3-bucket ${LAMBDA_BUCKET} --output-template-file cfn.packaged.yml

# important here: you have to hand over the INCLUDE_URL;
# later in the template, you can reference it using &quot;!Ref IncludeUrl&quot;
aws cloudformation deploy --template-file cfn.packaged.yml --stack-name ${STACK_NAME} --capabilities CAPABILITY_IAM --parameter-overrides IncludeUrl=${INCLUDE_URL}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, when deploying the CloudFormation stack, CloudFormation resolves the included templates by downloading them from S3 and inserting them directly into your main template. Each included template (and of course the main template as well) must not exceed the aforementioned size limit, otherwise the deployment will fail again.&lt;/p&gt;
&lt;h3&gt;Disadvantages of AWS::Include&lt;/h3&gt;
&lt;p&gt;Although it seems to be nice that such kind of import is possible, it comes with a few drawbacks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The included template must not contain the short form of &lt;a href=&quot;http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html&quot;&gt;intrinsic functions&lt;/a&gt;; only the full notation works. I did not know this in the beginning, so I had to change all functions to the full notation.&lt;/li&gt;
&lt;li&gt;If you&apos;re using a code completion plugin for CloudFormation templates like I do for IntelliJ, then you&apos;ll probably see many errors in your templates, because it can&apos;t resolve parameters or other stack resources anymore. That&apos;s not a problem of &lt;code&gt;AWS::Include&lt;/code&gt; directly, but it shows that you&apos;re introducing more complexity into your stack setup. So make sure you structure your code and included templates in a good way.&lt;/li&gt;
&lt;li&gt;You can&apos;t pass parameters into an included template.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For a more detailed view on this, please read this excellent blog post: &lt;a href=&quot;https://thomasvachon.com/articles/making-modular-cloudformation-with-includes/&quot;&gt;https://thomasvachon.com/articles/making-modular-cloudformation-with-includes/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Interesting to know: AFAIK you cannot outsource Lambda functions defined with the &lt;a href=&quot;https://github.com/awslabs/serverless-application-model&quot;&gt;Serverless Application Model (SAM)&lt;/a&gt; into separate templates, because these resources need to be transformed first. For more details on configuring &lt;a href=&quot;/2017/06/24/5-things-consider-writing-lambda-function/&quot;&gt;Lambda functions in SAM templates&lt;/a&gt;, check out my guide on essential Lambda considerations.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Finally, I want to say that I prefer the AWS::Include solution, because I think you just postpone the problem by using a preprocessor. At some point, you will reach the limit again and then you need to change an even bigger stack. So try to be prepared for this problem by building a modular stack from the beginning! This approach works particularly well when &lt;a href=&quot;/2018/02/03/creating-different-aws-cloudformation-environments/&quot;&gt;creating different CloudFormation environments&lt;/a&gt;, allowing you to reuse template components across development, staging, and production stacks. You can also &lt;a href=&quot;/2018/04/22/shut-down-cloudformation-stack-resources-over-night-using-aws-lambda/&quot;&gt;manage your CloudFormation stacks efficiently&lt;/a&gt; to control costs. Also take a look at my GitHub repository &lt;a href=&quot;https://github.com/seeebiii/aws-cloudformation-templates&quot;&gt;aws-cloudformation-templates&lt;/a&gt; where I have included an example using &lt;code&gt;AWS::Include&lt;/code&gt; to reduce your CloudFormation template size.&lt;/p&gt;
</content:encoded></item><item><title>Starter Projects For AWS Lambda Using NodeJS And Java</title><link>https://www.sebastianhesse.de/2017/08/01/starter-projects-for-aws-lambda-using-nodejs-and-java/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2017/08/01/starter-projects-for-aws-lambda-using-nodejs-and-java/</guid><description>AWS Lambda starter projects for NodeJS and Java with CloudFormation templates. Get boilerplate code to quickly deploy Lambda functions with API Gateway integration.</description><pubDate>Tue, 01 Aug 2017 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Today I want to show you three starter projects for &lt;a href=&quot;https://aws.amazon.com/lambda/&quot;&gt;AWS Lambda&lt;/a&gt; using &lt;a href=&quot;https://aws.amazon.com/cloudformation/&quot;&gt;CloudFormation&lt;/a&gt; and &lt;a href=&quot;https://github.com/awslabs/serverless-application-model&quot;&gt;SAM - Serverless Application Model&lt;/a&gt;. If you&apos;re new to Lambda, check out my guide on &lt;a href=&quot;/2017/06/24/5-things-consider-writing-lambda-function/&quot;&gt;important things to consider when developing Lambda functions&lt;/a&gt; and the benefits of &lt;a href=&quot;/2019/07/21/going-serverless-why-and-how-1/&quot;&gt;infrastructure as code for serverless applications&lt;/a&gt;. I always like having some boilerplate code so I can get started quickly without copying code or project structures from an existing (and mature) project. Therefore I thought it would be good to have them in one repository. You can &lt;a href=&quot;https://github.com/seeebiii/aws-lambda-boilerplate&quot;&gt;find them on GitHub&lt;/a&gt;. The projects can be used for NodeJS and Java. One project even contains both: Java and NodeJS Lambdas in a single CloudFormation template.&lt;/p&gt;
&lt;p&gt;Note: All projects contain a &lt;code&gt;deploy.sh&lt;/code&gt; file which you can run. Each deploy file uses the AWS CLI to package and deploy the CloudFormation template via &lt;code&gt;aws cloudformation package&lt;/code&gt; and &lt;code&gt;aws cloudformation deploy&lt;/code&gt;, so you first have to add an existing S3 bucket (that you have access to!) to &lt;code&gt;deploy.sh&lt;/code&gt;.&lt;/p&gt;
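&lt;p&gt;For orientation, a &lt;code&gt;deploy.sh&lt;/code&gt; along these lines would do the job; bucket and stack names are placeholders you have to replace with your own values, and the exact scripts in the repositories may differ:&lt;/p&gt;

```shell
#!/bin/bash
set -euo pipefail

# Placeholders: replace with an existing bucket you have access to
# and your desired stack name.
S3_BUCKET="your-existing-bucket"
STACK_NAME="my-lambda-stack"

# Upload the code artifacts referenced via CodeUri and rewrite the template.
aws cloudformation package \
  --template-file cfn.yml \
  --s3-bucket "${S3_BUCKET}" \
  --output-template-file cfn.packaged.yml

# Create or update the stack from the packaged template.
aws cloudformation deploy \
  --template-file cfn.packaged.yml \
  --stack-name "${STACK_NAME}" \
  --capabilities CAPABILITY_IAM
```

This only sketches the flow; running it requires valid AWS credentials and the referenced template file.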
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NodeJS Starter Project:&lt;/strong&gt; Spins up a simple NodeJS Lambda function which is available under the API path &lt;code&gt;/hello&lt;/code&gt;. &lt;a href=&quot;https://github.com/seeebiii/aws-lambda-boilerplate/tree/master/aws-lambda-node-starter&quot;&gt;Link&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Java Starter Project:&lt;/strong&gt; Spins up a simple Java Lambda function using the &lt;a href=&quot;/2017/08/27/use-jersey-spring-aws-lambda/&quot;&gt;&lt;code&gt;RequestStreamHandler&lt;/code&gt; interface&lt;/a&gt;. The project has added &lt;a href=&quot;https://github.com/FasterXML/jackson&quot;&gt;Jackson&lt;/a&gt; to parse input and output from/to a stream. &lt;a href=&quot;https://github.com/seeebiii/aws-lambda-boilerplate/tree/master/aws-lambda-java-starter&quot;&gt;Link&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NodeJS and Java Starter Project:&lt;/strong&gt; Spins up two simple AWS Lambda functions: one for NodeJS and one for Java. The neat thing here is that it&apos;s possible to declare both the NodeJS and the Java Lambda function in one CloudFormation template. You just have to use different values for &lt;code&gt;Handler&lt;/code&gt;, &lt;code&gt;Runtime&lt;/code&gt; and &lt;code&gt;CodeUri&lt;/code&gt;. In the background, the AWS CLI packages and uploads both artifacts to S3. The functions are available under the API paths &lt;code&gt;/node&lt;/code&gt; and &lt;code&gt;/java&lt;/code&gt;. For more modern bundling approaches with AWS CDK, check out my guide on &lt;a href=&quot;/2021/01/16/5-ways-to-bundle-a-lambda-function-within-an-aws-cdk-construct/&quot;&gt;bundling Lambda functions within CDK constructs&lt;/a&gt;. &lt;a href=&quot;https://github.com/seeebiii/aws-lambda-boilerplate/tree/master/aws-lambda-node-java-starter&quot;&gt;Link&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example code showing how to use NodeJS and Java in one project:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Declare other stuff...

Resources:
  NodeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      Timeout: 10
      CodeUri: ./node-backend/target
      Policies:
        - AWSLambdaBasicExecutionRole
      Events:
        GetNodeResource:
          Type: Api
          Properties:
            Path: /node
            Method: get

  JavaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: de.sebastianhesse.examples.JavaHelloWorldHandler
      Runtime: java8
      Timeout: 10
      CodeUri: ./java-backend/target/target.jar
      Policies:
        - AWSLambdaBasicExecutionRole
      Events:
        GetJavaResource:
          Type: Api
          Properties:
            Path: /java
            Method: get
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Fast AWS Lambda Code Updates And Improved Lambda Logs</title><link>https://www.sebastianhesse.de/2017/07/11/fast-aws-lambda-code-updates-and-improved-lambda-logs/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2017/07/11/fast-aws-lambda-code-updates-and-improved-lambda-logs/</guid><description>Speed up AWS Lambda code updates from 2.5 minutes to 15 seconds with lambda-updater. Plus, easily trace logs across multiple Lambda functions using lambdalogs tool.</description><pubDate>Tue, 11 Jul 2017 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This year I&apos;ve started working with Amazon Web Services (AWS) and most notably &lt;a href=&quot;https://aws.amazon.com/lambda/&quot;&gt;AWS Lambda&lt;/a&gt;. It&apos;s awesome what Amazon is providing here! It&apos;s &lt;a href=&quot;https://aws.amazon.com/lambda/pricing/&quot;&gt;cheap&lt;/a&gt;, easy to start with when using &lt;a href=&quot;https://github.com/awslabs/serverless-application-model&quot;&gt;SAM - Serverless Application Model&lt;/a&gt; and easy to &lt;a href=&quot;http://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html&quot;&gt;integrate with other services&lt;/a&gt;. But there are also downsides (&lt;a href=&quot;https://news.ycombinator.com/item?id=14601809&quot;&gt;which were also discussed a lot on HN&lt;/a&gt;). Just to name a few: a) logging is a mess, b) debugging is not possible at all or c) CPU power only comes with more memory. Though I can&apos;t change b) and c), I could do something for a). Furthermore, it was a pain for me to update Lambda code quickly, because I am using &lt;a href=&quot;https://aws.amazon.com/cloudformation/&quot;&gt;CloudFormation&lt;/a&gt;. A CloudFormation update takes a lot of time and added up to 2:30 min for every update in my case - not acceptable for just changing one line of code. 
Therefore I decided to write some small CLI tools to overcome the logging problem and code updates with AWS Lambda.&lt;/p&gt;
&lt;h1&gt;Fast AWS Lambda Code Updates&lt;/h1&gt;
&lt;p&gt;Link: &lt;a href=&quot;https://github.com/seeebiii/lambda-updater&quot;&gt;https://github.com/seeebiii/lambda-updater&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This tool helps you update only the code of one or more Lambdas in a CloudFormation stack. In order to use it correctly, you first need to deploy your stack at least once. Then, if you just change a few lines of code (and have &lt;strong&gt;no changes in your CloudFormation template&lt;/strong&gt;), you can use the tool. Example for NodeJS:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;lambda-updater --cfn cfn.yml --stack your-stack --target target/index.js
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This also works for JAR files. You just hand over the path to your CloudFormation template, the target file containing your Lambda code, and the stack name. The tool automatically determines which of the stack&apos;s Lambda functions qualify for an update and updates them.&lt;/p&gt;
&lt;p&gt;For my use case the update time has been reduced from 2:30 min to 10-15 seconds for NodeJS and 50-60 seconds for Java JAR files. File size plays an important role here, but nevertheless I was able to cut the update time by more than half. Of course, in case something changes in the CloudFormation template I still have to use a regular stack update.&lt;/p&gt;
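&lt;p&gt;Under the hood, a code-only update boils down to Lambda&apos;s UpdateFunctionCode API. If you want to do it manually for a single function, a rough sketch with the AWS CLI looks like this (the function name and file paths are placeholders):&lt;/p&gt;

```shell
# Zip the changed handler file and push only the code, leaving the
# CloudFormation stack itself untouched.
zip -j function.zip target/index.js

aws lambda update-function-code \
  --function-name my-stack-MyFunction-1A2B3C \
  --zip-file fileb://function.zip
```

The function name must be the physical resource name from the stack, which is exactly the lookup that lambda-updater automates for you.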
&lt;h1&gt;Improved AWS Lambda Logs&lt;/h1&gt;
&lt;p&gt;Link: &lt;a href=&quot;https://github.com/seeebiii/lambdalogs&quot;&gt;https://github.com/seeebiii/lambdalogs&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;AWS Lambda always stores your log output in CloudWatch Logs. This is nice, because basic use of CloudWatch is free. But it&apos;s also a little bit cumbersome to use: you always have to select your Lambda log group and then the appropriate log stream. For example, it&apos;s not possible to trace a request across multiple Lambdas, which would make life easier if you have a larger stack. Therefore I&apos;ve decided to write my own small tool, especially to view such logs across multiple Lambda functions. You can use it like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;lambdalogs --stack your-stack --filter &apos;any log filter&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It works similarly to the Lambda updater tool above: it searches the CloudFormation stack for Lambda functions, collects their physical names, builds the appropriate log group names and searches all log groups using a &lt;a href=&quot;http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html&quot;&gt;CloudWatch filter pattern&lt;/a&gt;. There are far more options to customize the output of the search, so please check out the &lt;a href=&quot;https://github.com/seeebiii/lambdalogs&quot;&gt;GitHub page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You might ask yourself: &quot;How do you know the log group name of a Lambda?&quot; Well, it always follows the pattern &quot;&lt;em&gt;/aws/lambda/PHYSICAL_RESOURCE_NAME&lt;/em&gt;&quot;. So, once you have the resource name, the rest is easy.&lt;/p&gt;
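&lt;p&gt;Building the log group name is therefore a one-liner. A minimal sketch in Node.js (the helper name and the example resource name are mine):&lt;/p&gt;

```javascript
// A Lambda function's CloudWatch log group is derived from its physical
// resource name, so resolving the stack's functions is enough to find
// every function's logs.
function lambdaLogGroup(physicalResourceName) {
  return '/aws/lambda/' + physicalResourceName;
}

console.log(lambdaLogGroup('my-stack-MainFunction-1A2B3C'));
// prints /aws/lambda/my-stack-MainFunction-1A2B3C
```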
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;Tell me what you think of the tools! I&apos;d love to hear your feedback :) Also consider &lt;a href=&quot;/2017/06/24/5-things-consider-writing-lambda-function/&quot;&gt;5 Things To Consider For Writing A Lambda Function&lt;/a&gt; if you&apos;re into AWS Lambda development.&lt;/p&gt;
</content:encoded></item><item><title>5 Things To Consider For Writing A Lambda Function</title><link>https://www.sebastianhesse.de/2017/06/24/5-things-consider-writing-lambda-function/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2017/06/24/5-things-consider-writing-lambda-function/</guid><description>5 essential tips for writing AWS Lambda functions: choosing a runtime, using CloudFormation, attaching policies correctly, and improving cold starts.</description><pubDate>Sat, 24 Jun 2017 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A few years ago, Amazon Web Services (AWS) launched its service &lt;a href=&quot;https://aws.amazon.com/lambda/&quot;&gt;AWS Lambda&lt;/a&gt;. Since its start, the service has been generating more and more interest in the whole world of &lt;a href=&quot;https://en.wikipedia.org/wiki/Serverless_computing&quot;&gt;serverless computing&lt;/a&gt;. I&apos;ve also started using it this year, and today I want to share 5 things with you that I think are important when developing such functions.&lt;/p&gt;
&lt;h1&gt;1. Think about the use case&lt;/h1&gt;
&lt;p&gt;Lambdas are a good fit for event processing that doesn&apos;t require super low latency. If you have to process a stream or would like to build a fancy workflow (maybe using &lt;a href=&quot;https://aws.amazon.com/step-functions/&quot;&gt;Step Functions&lt;/a&gt;?), Lambdas are the way to go! They are small and can do one certain step of a process. And the good thing is you don&apos;t have to take care of the provisioning. But for use as a backend of an API, you should consider another solution. This is due to the cold start of a Lambda, which can take a few milliseconds (e.g. for small NodeJS functions) or up to a few seconds (Java functions in particular need more time to start because of the JVM). If your API has effects on the user experience (e.g. you want to load some data asynchronously), it might be worth evaluating other solutions.&lt;/p&gt;
&lt;h1&gt;2. Choose your language&lt;/h1&gt;
&lt;p&gt;It&apos;s important to think about this before you start developing your function. Generally, I&apos;d suggest writing your functions in NodeJS, because the packages are mostly very small (see also point 5) and this improves the startup time. There is also a wide range of NodeJS libraries out there to support your development. But there might be some cases where you have to use e.g. Java. One such case is migrating an existing application to AWS Lambda when that app is already written in Java. Then it makes sense to reuse existing code in order to avoid new bugs in your business logic. But again, consider point one: &lt;a href=&quot;/2021/02/14/using-spring-boot-on-aws-lambda-clever-or-dumb/&quot;&gt;using Java makes the most sense if your Lambdas are working in the background&lt;/a&gt; without expecting a low latency. Here are some small &lt;a href=&quot;/2017/08/01/starter-projects-for-aws-lambda-using-nodejs-and-java/&quot;&gt;starter snippets for NodeJS and Java&lt;/a&gt;:&lt;/p&gt;
&lt;h3&gt;NodeJS&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;module.exports.handler = function(event, context, callback) {
  // use event to access event data, like from S3, Api Gateway or similar
  // use callback(null, { }); to send a &quot;positive&quot; response
  // use callback(&apos;error&apos;) to send an error response
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Java&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class AbstractRequestStreamHandler implements RequestStreamHandler {

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        // use inputStream to read the event data
        // use outputStream to write some response data
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Personally, I prefer the RequestStreamHandler interface, because it gives you more flexibility in processing the event data yourself. This comes in handy if you want to do some custom format mapping. But that&apos;s up to you!&lt;/p&gt;
&lt;h1&gt;3. Use a CloudFormation template&lt;/h1&gt;
&lt;p&gt;This point is the most important one in my opinion: always use &lt;a href=&quot;/2019/07/21/going-serverless-why-and-how-1/&quot;&gt;Infrastructure as Code&lt;/a&gt;! It ensures that different team members can work on the same infrastructure, because you always get the same result. This is really valuable! Though it was quite complicated to set up a Lambda using CloudFormation in the beginning, AWS has improved this a lot by publishing the &lt;a href=&quot;https://github.com/awslabs/serverless-application-model&quot;&gt;Serverless Application Model&lt;/a&gt;. With this type, it&apos;s easier to declare a function and map it to a supported event. Here is an example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SampleLambda:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    CodeUri: target
    Policies:
      - AWSLambdaBasicExecutionRole
    Environment:
      Variables:
        SOME_VAR: &quot;My nice environment variable&quot;
    Events:
      GetResource:
        Type: Api
        Properties:
          Path: /api/hello
          Method: get
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example a Lambda function gets triggered if a GET request is sent to &quot;/api/hello&quot;. It&apos;s using NodeJS 6.10 and also gets an environment variable injected. Environment variables are a really nice feature to reference other resources like AWS SQS, S3, etc. Another important point is the Policies section, which is described in the next section. To look up all possible properties, take a look at the model reference: &lt;a href=&quot;https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md&quot;&gt;https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md&lt;/a&gt;. One last piece of advice: allocate enough memory if you&apos;re using Java functions.&lt;/p&gt;
&lt;h1&gt;4. Attach policies to your Lambda&lt;/h1&gt;
&lt;p&gt;AWS has really strict access management. This is good, because it ensures that you can&apos;t access something if you don&apos;t have the right permissions. But it can get a little bit annoying if you deploy a huge stack, start testing your system and your logs show you that you&apos;re not allowed to access a certain resource. So, keep an eye on your policies when accessing other resources. You can either use managed policies (like the one in the code example above, which gives your Lambda the ability to write logs to a CloudWatch log stream) or write your own policies. This is how it looks if you want to access AWS SQS, for example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Policies:
  - Version: &apos;2012-10-17&apos;
    Statement:
      - Effect: &quot;Allow&quot;
        Action:
          - &quot;sqs:DeleteMessage&quot;
          - &quot;sqs:ReceiveMessage&quot;
          - &quot;sqs:SendMessage&quot;
        Resource:
          - !GetAtt [MyQueue, Arn]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This grants access to delete, receive and send messages on the referenced queue called MyQueue. If you&apos;re wondering what kind of actions are provided by AWS, you can look them up &lt;a href=&quot;http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_access-levels.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;5. Reduce your dependencies&lt;/h1&gt;
&lt;p&gt;Last but not least, it&apos;s very good if you can reduce your dependencies as much as possible. As already said above, Lambdas have a &lt;a href=&quot;/2018/12/16/caching-in-aws-lambda/&quot;&gt;cold start&lt;/a&gt; which can take up to a few seconds for Java functions, and reducing the dependencies can improve this start time a lot. As an example for Java: in server applications I tend to simply include libraries like commons-lang or Guava, because they have some great helper classes and methods which make my life easier. And - of course - I don&apos;t have to reinvent the wheel. The problem is that if you&apos;re just using one class from a 500 KB dependency and you have 4 more similar dependencies, you have already blown up your Lambda function by about 2.5 MB. Think about whether you really need to include a whole dependency just to use StringUtils.isNotBlank() from it (this is of course an extreme example, but I&apos;m sure someone has done it). NodeJS has a huge advantage in this case: there are great tools like &lt;a href=&quot;https://webpack.github.io/&quot;&gt;webpack&lt;/a&gt; which can identify the parts of your code that are actually used and filter out all unnecessary code, reducing the output a lot.&lt;/p&gt;
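&lt;p&gt;For illustration, a minimal webpack configuration for bundling a Lambda handler could look like this; the entry and output paths are assumptions, not taken from a real project:&lt;/p&gt;

```javascript
const path = require('path');

// Sketch of a webpack config for a Lambda handler. 'production' mode
// enables minification and tree shaking, so unused exports of your
// dependencies are dropped from the bundle.
module.exports = {
  entry: './src/index.js',
  target: 'node',               // do not bundle Node.js built-ins
  mode: 'production',
  output: {
    path: path.resolve(__dirname, 'target'),
    filename: 'index.js',
    libraryTarget: 'commonjs2'  // keeps exports.handler callable by Lambda
  }
};
```

The resulting `target/index.js` is then what you point `CodeUri` at, instead of shipping your whole `node_modules` directory.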
</content:encoded></item><item><title>Start Dijkstra Shortest Path using JMapViewer</title><link>https://www.sebastianhesse.de/2017/01/27/start-dijkstra-shortest-path-using-jmapviewer/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2017/01/27/start-dijkstra-shortest-path-using-jmapviewer/</guid><description>Implement Dijkstra&apos;s shortest path algorithm with JMapViewer for OpenStreetMap. Visualize routes on interactive maps using Java Swing and mouse click events.</description><pubDate>Fri, 27 Jan 2017 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As mentioned in the last post to JMapViewer, I want to show you how to start the Dijkstra shortest path algorithm using JMapViewer. &lt;a href=&quot;https://gist.github.com/seeebiii/4ffabc2882590533a1ecd986c8f9ff5c&quot;&gt;Based on this Gist&lt;/a&gt;, I will &lt;strong&gt;briefly&lt;/strong&gt; explain how to call Dijkstra and visualise the shortest path.&lt;/p&gt;
&lt;p&gt;In fact, you just have to add the following code to connect your JMapViewer instance to your Graph:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;map().addMouseListener(new MouseAdapter() {
    @Override
    public void mouseClicked(MouseEvent e) {
        if (e.getButton() == MouseEvent.BUTTON1) {
            map().getAttribution().handleAttribution(e.getPoint(), true);
            ICoordinate position = map().getPosition(e.getPoint());

            // save point by using:
            // position.getLat();
            // position.getLon();

            // add a map marker to the map
            map().addMapMarker(new MapMarkerDot(position.getLat(), position.getLon()));

            if (...) {
                // if you have saved two points, then call your Dijkstra
                // to compute the route between them:

                List&amp;lt;Node&amp;gt; shortestPath = dijkstra.getShortestPath(startPoint, endPoint);

                Layer routeLayer = new Layer(&quot;Name of your path, so you can hide it later in the map.&quot;);
                for (Node node : shortestPath) {
                    // add a marker for each point, so you can visualise the shortest path
                    MapMarkerDot marker = new MapMarkerDot(routeLayer, node.getLat(), node.getLon());
                    map().addMapMarker(marker);
                }
            }
        }
    }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Line 1: Adding a mouse listener that reacts to every mouse click.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Line 4: This line checks that the left mouse has been clicked.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Line 5-6: Now you have to retrieve the position of the mouse click.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Line 9-10: With the position, you can get latitude or longitude values. You should store these position information somewhere, e.g. in a class property.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Line 13: To support the user&apos;s view, you should add a MapMarker to the map. This shows the user where they have clicked.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Line 15: If you have stored two points (i.e. a user has clicked twice), then you can forward your points to Dijkstra.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Line 17: Calling &lt;a href=&quot;https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm&quot;&gt;Dijkstra&lt;/a&gt; is done within the if-body.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Line 21-26: If you have retrieved a shortest path, you should display its single nodes (points) in the map. This can be done by iterating through the node list and adding a MapMarker for each node.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That&apos;s it. Very easy. If you have used a Layer for the path nodes, you can easily show/hide the path in your map. Just (un-)select the layer in your application.&lt;/p&gt;
</content:encoded></item><item><title>Revert rebasing errors with Git reflog</title><link>https://www.sebastianhesse.de/2016/12/15/revert-rebasing-errors-with-git-reflog/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/12/15/revert-rebasing-errors-with-git-reflog/</guid><description>Fix Git rebase mistakes using git reflog to recover lost commits. Step-by-step guide to identifying conflicts and reviewing rebased changes after errors.</description><pubDate>Thu, 15 Dec 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently I was facing a difficult situation: my team and I were working on a feature (using the &lt;a href=&quot;https://www.atlassian.com/git/tutorials/comparing-workflows/feature-branch-workflow&quot;&gt;Feature Branch Model&lt;/a&gt;) and I wanted to rebase my code onto the changes of another branch. Nothing special so far. But there were several conflicts, because we had changed code at similar lines. After finishing the rebase and force pushing to the upstream, I realised that I had made a small mistake while resolving the conflicts: I had selected the wrong changes to apply. But reverting by just checking out my old changes and doing the rebase again was obviously not an option. Therefore I want to share my experience with you in this blog post and give you a process for reverting such rebasing errors.&lt;/p&gt;
&lt;h1&gt;Before Rebasing&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/branch_changes_without_rebase.png&quot; alt=&quot;Git branch diagram showing diverging changes before rebase operation&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Before rebasing any changes, the situation looked like this. At some point I&apos;ve checked out a new branch called &lt;em&gt;TEST01&lt;/em&gt;. Let&apos;s say at this point &lt;strong&gt;both branches&lt;/strong&gt; had a text file with the following content:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Text 123

Changes 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now I&apos;ve made some changes (blue circles). After that, the text file might have looked like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Text 123

Changes 1

Some more changes. But nothing more.

TEST01:
- Change 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the meantime, someone else also made changes on the original/master branch (green circles) and pushed them to the upstream. The file might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Text 123

Internal Changes 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you might have guessed, the third line produced a conflict while rebasing, because this line changed on both branches. On the original/master it contains the additional word &quot;Internal&quot; and on the branch &lt;em&gt;TEST01&lt;/em&gt; some more lines have been added.&lt;/p&gt;
&lt;h1&gt;After Rebasing&lt;/h1&gt;
&lt;p&gt;Let&apos;s go through the process of rebasing the branch &lt;em&gt;TEST01&lt;/em&gt; onto the original/master branch. In order to do this, pull the latest changes, checkout branch &lt;em&gt;TEST01&lt;/em&gt; and try to rebase: &lt;em&gt;git rebase master&lt;/em&gt;. Git will output that it has detected a conflict and that you have to resolve it. It might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Text 123

&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt; 58751f995455c69cce34166888618e685fa05ae7
Internal Changes 1

=======
Changes 1

Some more changes. But nothing more.

TEST01:
- Change 1
&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt; TEST01 added first change to text.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Resolving this conflict is where a rebasing error can slip in. For example, deciding not to include the word &quot;Internal&quot; might not matter much in this small example. But imagine a huge software project where multiple lines are in conflict and you fail to include one important if-condition: it might screw up your whole project. For simplicity, let&apos;s just assume I forgot to include &quot;Internal&quot; when resolving the conflict.&lt;/p&gt;
&lt;p&gt;After resolving the conflict, the branch should have my changes on top like in the following figure:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/branch_changes_with_rebase.png&quot; alt=&quot;Git branch diagram showing linearized commit history after rebase operation&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This might be fine for now, but what if you encounter any issues afterwards? For example 50% of your tests fail. Then you have to find the needle in the haystack.&lt;/p&gt;
&lt;h1&gt;Revert Rebasing Errors&lt;/h1&gt;
&lt;p&gt;It might be easier to review your rebase and find what you&apos;ve missed. This is where it can get tricky. A good starting point is to look at the conflicting lines: you could compare your current code with the code from the commit which was in conflict with your changes. Well... nice try! When you resolve conflicts, Git rewrites the commit history, because commits are &lt;em&gt;immutable&lt;/em&gt;. This means you can&apos;t just go back with &lt;code&gt;git reset&lt;/code&gt; to a branch reference, because your branch history has been overwritten. So what else can you do?&lt;/p&gt;
&lt;p&gt;First, remember the conflicts you&apos;ve encountered while rebasing. Do you have them? Fine, keep them in mind, because now you have to iterate through them!&lt;/p&gt;
&lt;p&gt;Second, search the log history with &lt;code&gt;git reflog&lt;/code&gt;. &lt;a href=&quot;https://git-scm.com/docs/git-reflog&quot;&gt;From the documentation&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Reference logs, or &quot;reflogs&quot;, record when the tips of branches and other references were updated in the local repository. Reflogs are useful in various Git commands, to specify the old value of a reference. For example, &lt;code&gt;HEAD@{2}&lt;/code&gt; means &quot;where HEAD used to be two moves ago&quot;, &lt;code&gt;master@{one.week.ago}&lt;/code&gt; means &quot;where master used to point to one week ago in this local repository&quot;, and so on. [...] This command manages the information recorded in the reflogs.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you see its output for the first time, it might look like a bit too much information in one place (especially in bigger projects). An example output might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;c125709 HEAD@{0}: rebase finished: returning to refs/heads/TEST01-extend-text-file
c125709 HEAD@{1}: rebase: TEST01 added second change to text.txt
b1eb54c HEAD@{2}: rebase: TEST01 added first change to text.txt
58751f9 HEAD@{3}: rebase: checkout master
05e3c7c HEAD@{4}: checkout: moving from master to TEST01-extend-text-file
58751f9 HEAD@{5}: commit: Fixed laster internal changes
42bc3db HEAD@{6}: commit: Also added my internal changes
ea0ed79 HEAD@{7}: checkout: moving from TEST01-extend-text-file to master
05e3c7c HEAD@{8}: commit: TEST01 added second change to text.txt
386cd47 HEAD@{9}: commit: TEST01 added first change to text.txt
ea0ed79 HEAD@{10}: checkout: moving from master to TEST01-extend-text-file
ea0ed79 HEAD@{11}: commit: Added some changes to Text.txt
a1aa182 HEAD@{12}: commit (initial): Added Text.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this information, you just have to search for the original commit. In this case, it&apos;s line 7 (&lt;code&gt;HEAD@{6}&lt;/code&gt;) where the word &quot;Internal&quot; has been introduced. With&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git show 42bc3db
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;you can see the changes which have been made in that commit. Personally, I prefer to use a Git client like SourceTree for this, because it has a better visualisation: you can easily hand over the commit hash and see all differences. Now you can compare this to your current code and fix your problems.&lt;/p&gt;
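&lt;p&gt;If you don&apos;t just want to review the difference but rather restore the branch to its state before the rebase, the reflog also enables a full undo. The following is a sketch based on the example reflog output above (&lt;code&gt;HEAD@{8}&lt;/code&gt; was the last &lt;em&gt;TEST01&lt;/em&gt; commit before the rebase started); only do this if another force push is acceptable for your team:&lt;/p&gt;

```shell
# Point the branch back at its pre-rebase tip. In the reflog above,
# HEAD@{8} is "TEST01 added second change to text.txt" (05e3c7c).
git reset --hard HEAD@{8}

# If the broken rebase was already force-pushed, push the restored branch
# again. Careful: this rewrites the upstream history once more.
git push --force-with-lease origin TEST01-extend-text-file
```

&lt;p&gt;After that you can redo the rebase and resolve the conflicts correctly.&lt;/p&gt;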
&lt;p&gt;An alternative solution is to take a look at the upstream. E.g. if you&apos;re using Bitbucket or GitHub, you can directly see the commits in your browser. But this is only possible if you haven&apos;t pushed your rebase to the upstream yet (as I did with mine). Otherwise the commit might already have been removed from the upstream.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Further links:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://stackoverflow.com/questions/134882/undoing-a-git-rebase&quot;&gt;http://stackoverflow.com/questions/134882/undoing-a-git-rebase&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://www.ocpsoft.org/tutorials/git/use-reflog-and-cherry-pick-to-restore-lost-commits/&quot;&gt;http://www.ocpsoft.org/tutorials/git/use-reflog-and-cherry-pick-to-restore-lost-commits/&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>Three inspiring indie hacker projects I&apos;ve found on IndieHackers.com</title><link>https://www.sebastianhesse.de/2016/12/08/three-inspiring-indie-hacker-projects-ive-found-indiehackers-com/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/12/08/three-inspiring-indie-hacker-projects-ive-found-indiehackers-com/</guid><description>3 inspiring indie hacker projects from IndieHackers.com: SubmitHub, Logojoy, and park.io. Learn how solo founders automate processes to build profitable businesses.</description><pubDate>Thu, 08 Dec 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This summer I discovered the website &lt;a href=&quot;https://www.indiehackers.com/&quot;&gt;IndieHackers.com&lt;/a&gt;, which interviews &quot;hackers&quot; who have started their own businesses (or side projects) and earn money from them. It&apos;s really inspiring how many ideas people have and how they have realised their passion projects. Also quite nice is the fact that they share how much money they earn, so you as a reader can see the relation between the effort the hackers have invested in their project and the outcome it produces. Today I&apos;d like to share three of these hacker projects which have inspired me the most. All of them have in common that they automate a process in a remarkable way, which is quite interesting to see.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/indiehackers-website.jpg&quot; alt=&quot;IndieHackers.com Website&quot; /&gt;&lt;/p&gt;
&lt;h1&gt;SubmitHub&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Interview Link:&lt;/strong&gt; &lt;a href=&quot;https://www.indiehackers.com/businesses/submithub&quot;&gt;https://www.indiehackers.com/businesses/submithub&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This guy started out with a music blog where he promoted new music. After a while he got lots of emails every day from users asking him to listen to and hopefully promote their new music. At some point he decided to automate things and saw a chance to earn money by letting users pay for their submissions. He has also connected further blogs and labels to his service, which receive the submissions and can rate them. He validated his idea by building a prototype, and thanks to the large audience of his blog he was able to grow very fast. Personally, I&apos;m not into the music business, but the idea behind it is great.&lt;/p&gt;
&lt;h1&gt;Logojoy&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Interview Link:&lt;/strong&gt; &lt;a href=&quot;https://www.indiehackers.com/businesses/logojoy&quot;&gt;https://www.indiehackers.com/businesses/logojoy&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Another great indie hacker project is Logojoy where you can create your own logo. The unique key is that the logo is created by an AI-like algorithm. This means the software evaluates what kind of logos users like (e.g. colors, fonts, font sizes) and based on that it creates a new logo with your brand&apos;s name. The interview is very interesting to read, I can really recommend it.&lt;/p&gt;
&lt;h1&gt;park.io&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Interview Link:&lt;/strong&gt; &lt;a href=&quot;https://www.indiehackers.com/businesses/park-io&quot;&gt;https://www.indiehackers.com/businesses/park-io&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;park.io is a really cool project and the most inspiring one for me. It basically reserves domains which will become available in the near future. If only one user is interested in a particular domain, he gets it for $99. Otherwise (if more than one user is interested in a domain) an auction is created and users have 10 days to say how much they are willing to pay for it. The one with the best offer wins. The interesting thing is that the creator of park.io has automated everything, from reserving the domain to running the auctions. It&apos;s really incredible, so just read the interview.&lt;/p&gt;
</content:encoded></item><item><title>Set up JMapViewer for OSM data</title><link>https://www.sebastianhesse.de/2016/11/24/set-up-jmapviewer-osm-data/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/11/24/set-up-jmapviewer-osm-data/</guid><description>Set up JMapViewer for OpenStreetMap data in Java Swing applications. Add Maven dependencies, configure map tiles, and implement mouse interactions for map markers.</description><pubDate>Thu, 24 Nov 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently I had to work with OSM data at university, and we had to visualise the data with JMapViewer. It&apos;s a small project which helps you connect your OSM data with a Java Swing application. For example, this is useful for user interactions when calculating routes with OSM data. So I started investigating the project and found its documentation: &lt;a href=&quot;http://wiki.openstreetmap.org/wiki/JMapViewer&quot;&gt;http://wiki.openstreetmap.org/wiki/JMapViewer&lt;/a&gt;. It gives a quick overview of the project, but no proper getting-started guide. Therefore I want to provide such a (short) tutorial here.&lt;/p&gt;
&lt;h1&gt;1. Update your Maven pom.xml&lt;/h1&gt;
&lt;p&gt;First of all you should add the josm repository to your Maven pom.xml and also add a dependency:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&amp;gt;
&amp;lt;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot;
         xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;
         xsi:schemaLocation=&quot;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd&quot;&amp;gt;

    &amp;lt;!-- ... --&amp;gt;

    &amp;lt;repositories&amp;gt;
        &amp;lt;repository&amp;gt;
            &amp;lt;id&amp;gt;josm-public&amp;lt;/id&amp;gt;
            &amp;lt;name&amp;gt;josm public releases&amp;lt;/name&amp;gt;
            &amp;lt;url&amp;gt;https://josm.openstreetmap.de/nexus/content/groups/public&amp;lt;/url&amp;gt;
        &amp;lt;/repository&amp;gt;
    &amp;lt;/repositories&amp;gt;

    &amp;lt;dependencies&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.openstreetmap.jmapviewer&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;jmapviewer&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;2.0&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;!-- other dependencies --&amp;gt;
    &amp;lt;/dependencies&amp;gt;

    &amp;lt;!-- ... --&amp;gt;

&amp;lt;/project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the JMapViewer artifact is not available in Maven Central, you need to add the josm repository to your pom.xml. They&apos;ve added Maven support with this issue: &lt;a href=&quot;https://josm.openstreetmap.de/ticket/12263&quot;&gt;https://josm.openstreetmap.de/ticket/12263&lt;/a&gt;. &lt;strong&gt;Note:&lt;/strong&gt; There are similar projects called &lt;a href=&quot;http://wiki.openstreetmap.org/wiki/JXMapViewer&quot;&gt;JXMapViewer&lt;/a&gt; and &lt;a href=&quot;http://wiki.openstreetmap.org/wiki/JXMapViewer2&quot;&gt;JXMapViewer2&lt;/a&gt; which won&apos;t be discussed here. The latter is still maintained and can be found on Maven Central, so you don&apos;t need any extra repository for it.&lt;/p&gt;
&lt;h1&gt;2. Create a running example&lt;/h1&gt;
&lt;p&gt;Now you can create a class to start &lt;strong&gt;JMapViewer&lt;/strong&gt;. Extend &lt;em&gt;javax.swing.JFrame&lt;/em&gt; and set up the JFrame by adding a layout. You can also set some options for the map, e.g. the tile loading (&lt;a href=&quot;https://wiki.openstreetmap.org/wiki/Tiles&quot;&gt;this loads the different images for a map, e.g. if you zoom&lt;/a&gt;) or that markers should be visible (see the options set in the constructor below). We won&apos;t add any markers yet, because this will be discussed in another blog post.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/**
 * Based on http://svn.openstreetmap.org/applications/viewer/jmapviewer/src/org/openstreetmap/gui/jmapviewer/Demo.java by Jan Peter Stotz
 */
public class OsmMapViewer extends JFrame implements JMapViewerEventListener {

    private static final long serialVersionUID = 1L;

    private JMapViewerTree treeMap;
    private JLabel zoomLabel;
    private JLabel zoomValue;
    private JLabel mperpLabelName;
    private JLabel mperpLabelValue;

    /**
     * Setups the JFrame layout, sets some default options for the JMapViewerTree and displays a map in the window.
     */
    public OsmMapViewer() {
        super(&quot;JMapViewer Demo&quot;);
        treeMap = new JMapViewerTree(&quot;Zones&quot;);
        setupJFrame();
        setupPanels();

        // Listen to the map viewer for user operations so components will
        // receive events and updates
        map().addJMVListener(this);

        // Set some options, e.g. tile source and that markers are visible
        map().setTileSource(new OsmTileSource.Mapnik());
        map().setTileLoader(new OsmTileLoader(map()));
        map().setMapMarkerVisible(true);
        map().setZoomContolsVisible(true);

        // activate map in window
        treeMap.setTreeVisible(true);
        add(treeMap, BorderLayout.CENTER);
    }

    // ... further methods like setupJFrame() or setupPanels()

    private JMapViewer map() {
        return treeMap.getViewer();
    }

    /**
     * @param args Main program arguments
     */
    public static void main(String[] args) {
        new OsmMapViewer().setVisible(true);
    }

    @Override
    public void processCommand(JMVCommandEvent command) {
        // ...
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally you just call &lt;code&gt;new OsmMapViewer().setVisible(true)&lt;/code&gt; in the &lt;code&gt;main&lt;/code&gt; method and run the code. Note that I&apos;ve removed some methods from the code above for readability. You can see the full code in &lt;a href=&quot;https://gist.github.com/seeebiii/9dce688186248644ebb373a266539437&quot;&gt;this GitHub Gist&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you run the code, the following window should be shown:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/JMapViewer_Basic_Example.jpg&quot; alt=&quot;JMapViewer Basic Example&quot; /&gt;&lt;/p&gt;
&lt;p&gt;It&apos;s not a fancy UI, but it provides some basic functionality. An alternative solution with a better UI would be &lt;a href=&quot;http://leafletjs.com/&quot;&gt;Leaflet&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In a future blog post I&apos;ll describe how to add markers and connect them to a routing algorithm.&lt;/p&gt;
</content:encoded></item><item><title>Deploy a Multi-Module Maven Project to Heroku</title><link>https://www.sebastianhesse.de/2016/10/14/deploy-multi-module-maven-project-heroku/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/10/14/deploy-multi-module-maven-project-heroku/</guid><description>Deploy multi-module Maven projects to Heroku with Spring Boot. Use config variables and Procfile to manage multiple microservices from a single repository.</description><pubDate>Fri, 14 Oct 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently I was building a private hobby project where I wanted to use Heroku to deploy some microservices and gain some experience with it. Since I&apos;m a Java enthusiast, I wanted to use a Multi-Module Maven project to share some classes between the different microservices. So my mission was to deploy each submodule to a different Heroku app (I know it&apos;s completely against the nature of microservices to code them all in the same language and keep them in one big project like a monolith - but I have my reasons). Getting started with Heroku was quite simple, because they have a very nice &lt;a href=&quot;https://devcenter.heroku.com/articles/getting-started-with-java#introduction&quot;&gt;guide to set up and run your first app in the cloud&lt;/a&gt;. Unfortunately, Heroku only supports one Procfile per project, so it&apos;s not so easy to deploy multiple submodules. But there is a way: you can use &lt;strong&gt;Config Variables&lt;/strong&gt;. Let&apos;s see step by step how to use them!&lt;/p&gt;
&lt;h2&gt;1. Create your project&lt;/h2&gt;
&lt;p&gt;First of all you start by creating a Multi-Module Maven Project with two submodules. You can find an example &lt;a href=&quot;https://github.com/seeebiii/multi-module-heroku&quot;&gt;here in my GitHub-Repository&lt;/a&gt;. I used Spring Boot and Jersey to create a simple HelloWorldResource:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import org.springframework.stereotype.Component;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Component
@Path(&quot;/hello&quot;)
public class HelloWorldResource {

    @GET
    public Response sayHelloWorld() {
        return Response.ok(&quot;Hello World One!&quot;).build();
    }

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It just returns a simple &quot;Hello World One&quot; (there is a similar resource for the other submodule returning &quot;Hello World Two&quot;). You also need to add a configuration class for Jersey which makes the &lt;em&gt;HelloWorldResource&lt;/em&gt; available:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.context.annotation.Configuration;

/**
 * Jersey configuration class. Uses package to scan for resources.
 */
@Configuration
public class JerseyConfig extends ResourceConfig {

    public JerseyConfig() {
        super();
        // you can either call register(HelloWorldResource.class) or
        // you can be lazy and just set the package to 
        // search for the REST resource(s)
        packages(&quot;your.package.name&quot;);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Also create the application starter class for Spring Boot:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * Starter for child one application.
 */
@SpringBootApplication
public class ChildOneApplication {

    public static void main(String[] args) {
        SpringApplication.run(ChildOneApplication.class, args);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should repeat that for your second submodule to be able to check later if the modules have been deployed successfully.&lt;/p&gt;
&lt;h3&gt;The Procfile&lt;/h3&gt;
&lt;p&gt;Especially relevant for a successful submodule deployment is a &lt;a href=&quot;https://devcenter.heroku.com/articles/procfile&quot;&gt;custom Procfile for your Heroku app&lt;/a&gt; like the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;web: java -Dserver.port=$PORT -jar $PATH_TO_JAR
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This file contains just one line and is placed in the root directory of your project (compare my GitHub repository). Heroku identifies the kind of your app by the keyword &lt;strong&gt;web&lt;/strong&gt;. The variable &lt;code&gt;$PATH_TO_JAR&lt;/code&gt; allows us to set a custom path to our jar file later. The jar file will be generated by Maven/Spring Boot. You should add the &lt;a href=&quot;http://docs.spring.io/spring-boot/docs/current/reference/html/build-tool-plugins-maven-plugin.html&quot;&gt;Spring Boot Maven Plugin&lt;/a&gt; to your pom.xml:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;build&amp;gt;
    &amp;lt;plugins&amp;gt;
        &amp;lt;plugin&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-maven-plugin&amp;lt;/artifactId&amp;gt;
        &amp;lt;/plugin&amp;gt;
        &amp;lt;!-- ... --&amp;gt;
    &amp;lt;/plugins&amp;gt;
&amp;lt;/build&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Spring Boot Maven Plugin creates an executable jar file of your application which automatically calls your starter application class (here: &lt;em&gt;ChildOneApplication&lt;/em&gt;). To see the full setup of each pom.xml file, please take a look at &lt;a href=&quot;https://github.com/seeebiii/multi-module-heroku&quot;&gt;my GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;2. Setup your Heroku app&lt;/h2&gt;
&lt;p&gt;In order to deploy the app, you have to use a Git repository. So either &lt;a href=&quot;https://devcenter.heroku.com/articles/github-integration&quot;&gt;connect your GitHub account to Heroku&lt;/a&gt; or &lt;a href=&quot;https://devcenter.heroku.com/articles/git&quot;&gt;create a new Git repository connected to Heroku&lt;/a&gt;. I prefer to connect GitHub to my Heroku account, because I manage most of my projects there. If you connect your GitHub account, you then need to create a new app in Heroku and select the repository of your Multi-Module Maven Project. If you&apos;ve directly connected your Git repository to Heroku, this step isn&apos;t necessary. After creating the app, you need to go to the settings of your app and add a &lt;strong&gt;Config Variable&lt;/strong&gt; like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/heroku-add-config-var.jpg&quot; alt=&quot;Add Config Variable in Heroku&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The value should match the pattern &lt;code&gt;&amp;lt;submodule&amp;gt;/target/&amp;lt;jar-file-name&amp;gt;.jar&lt;/code&gt;, i.e. the relative path to the jar file that Maven builds, where you replace &lt;code&gt;&amp;lt;submodule&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;jar-file-name&amp;gt;&lt;/code&gt; with the real values. You should repeat the steps for every submodule you want to deploy, i.e.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create the app in Heroku&lt;/li&gt;
&lt;li&gt;Connect your repository (if necessary)&lt;/li&gt;
&lt;li&gt;Add the Config Variable for &lt;strong&gt;PATH_TO_JAR&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
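&lt;p&gt;To make the pattern concrete: for a hypothetical submodule named &lt;code&gt;child-one&lt;/code&gt; with version &lt;code&gt;0.0.1-SNAPSHOT&lt;/code&gt;, the config variable would hold the relative path from the repository root to the jar that Maven builds, so that the &lt;code&gt;java -jar $PATH_TO_JAR&lt;/code&gt; command in the Procfile can find it:&lt;/p&gt;

```
PATH_TO_JAR=child-one/target/child-one-0.0.1-SNAPSHOT.jar
```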
&lt;h2&gt;3. Deploy and Test&lt;/h2&gt;
&lt;p&gt;Finally you are able to deploy your submodule as a HelloWorld microservice. Just go to &apos;&lt;em&gt;Deploy&lt;/em&gt;&apos;, click &apos;&lt;em&gt;Deploy Branch&lt;/em&gt;&apos; at the bottom of the page, and Heroku will automatically check out the repository, build your project and start the application. Then open &lt;strong&gt;https://your-app-name.herokuapp.com/hello&lt;/strong&gt; and check whether it returns &quot;Hello World One!&quot; or &quot;Hello World Two!&quot; respectively.&lt;/p&gt;
</content:encoded></item><item><title>Getting notified about new JIRA issues</title><link>https://www.sebastianhesse.de/2016/07/14/getting-notified-about-new-jira-issues/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/07/14/getting-notified-about-new-jira-issues/</guid><description>Stay updated on new Jira issues with filter subscriptions. Create saved searches with custom criteria and receive automated email notifications on your schedule.</description><pubDate>Thu, 14 Jul 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://www.atlassian.com/software/jira&quot;&gt;JIRA&lt;/a&gt; is a great issue tracking software by Atlassian and it offers many features to keep your bugs and tasks organised. However, if you&apos;ve been using it for a while and your project grows bigger and bigger, it can get quite difficult to stay updated on your issues. What I mean is: you can&apos;t keep track of all new issues by yourself, especially if you have a public JIRA instance where all of your customers can add issues. So let&apos;s see what JIRA offers to support you in getting the newest issues of your project.&lt;/p&gt;
&lt;h1&gt;Search for Issues&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/jira-search-issues-menu.png&quot; alt=&quot;Select the option &amp;quot;Search for issues&amp;quot; in JIRA&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The easiest way to retrieve new issues is to search for them. JIRA offers a powerful search mechanism where you can search for... well, nearly everything related to your issues. Besides basic criteria like the issue&apos;s project or some text within your issue, you can also use a more fine-grained search (&lt;a href=&quot;https://confluence.atlassian.com/jirasoftwarecloud/advanced-searching-764478330.html&quot;&gt;or use the Advanced search if you&apos;re familiar with JQL&lt;/a&gt;):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/jira-search-details.png&quot; alt=&quot;Search for details of your JIRA issue&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Within the list of criteria you see on the image, you will find &lt;em&gt;Created Date&lt;/em&gt;. If you select it, a popup opens where you can define some attributes. E.g. you can select a range of days or even minutes that the issue&apos;s &lt;em&gt;Created Date&lt;/em&gt; attribute should match.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/jira-created-date-filter.png&quot; alt=&quot;Use &amp;quot;Created Date&amp;quot; to select a range of days or minutes&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This is exactly what we&apos;ve been looking for! For example, you can set &lt;em&gt;Within the last 24 hours&lt;/em&gt; and you will see all issues which have been created in the last 24 hours. Nice! But to be honest, we don&apos;t want to set this up every time we&apos;re looking for the newest issues.&lt;/p&gt;
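&lt;p&gt;If you are comfortable with the advanced search, the same search can be expressed as a single JQL query (a sketch; &lt;code&gt;MYPROJECT&lt;/code&gt; is a placeholder for your project key):&lt;/p&gt;

```
project = MYPROJECT AND created >= -24h ORDER BY created DESC
```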
&lt;h1&gt;Issue Filters and Subscriptions&lt;/h1&gt;
&lt;p&gt;Of course there is a way to save this search for the next time. Just click the &lt;em&gt;Save&lt;/em&gt; button next to the heading and give your search a meaningful name. JIRA stores this search as a filter (&lt;a href=&quot;https://confluence.atlassian.com/jirasoftwarecloud/saving-your-search-as-a-filter-764478344.html#Savingyoursearchasafilter-sharing_filtersSharingafilter&quot;&gt;filters can also be shared within your instance&lt;/a&gt;). This comes in very handy for reaching our original goal: &lt;strong&gt;getting notified about new JIRA issues&lt;/strong&gt;. You can achieve this by opening &lt;em&gt;View all filters&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/jira-view-all-filters.png&quot; alt=&quot;Open &amp;quot;View all filters&amp;quot; in JIRA&quot; /&gt;&lt;/p&gt;
&lt;p&gt;By default this shows you a list of your favourite filters. The interesting thing is that you can add a subscription for each of your filters. These subscriptions will notify you with a status based on your desired time and interval:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/jira-filter-subscription-settings.png&quot; alt=&quot;Manage your filter subscription by adding time and interval options&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Awesome! This is exactly what we need to keep an eye on the new issues in our project. Combined with the powerful search and filter mechanism in JIRA, you can subscribe to many individual views on your project. Have fun using filter subscriptions!&lt;/p&gt;
</content:encoded></item><item><title>How to test a web app with popups using Selenium</title><link>https://www.sebastianhesse.de/2016/07/06/how-to-test-a-web-app-with-popups-using-selenium/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/07/06/how-to-test-a-web-app-with-popups-using-selenium/</guid><description>Test hover popups with Selenium using Actions class and WebDriverWait. Solve the common problem of timing issues when popups don&apos;t open immediately.</description><pubDate>Wed, 06 Jul 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Some time ago I had to test a web app where a popup was triggered when the user hovers over a specific link. This is not as easy as testing whether an element contains a specific text, but it&apos;s possible using &lt;a href=&quot;https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/interactions/Actions.html&quot;&gt;Selenium Actions&lt;/a&gt;. This class provides methods to perform custom gestures on a page, like moving to an element. Here is an example of how to do this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// In the following a selector is something like By.id(&quot;identifier&quot;)

// use the Actions class from Selenium to perform custom &quot;mouse&quot; actions
Actions builder = new Actions(driver);

// move &quot;mouse&quot; to popup link which will open the popup and
// then move to the popup in order to avoid automatic closing of it
builder.moveToElement(driver.findElement(POPUP_LINK_SELECTOR))
       .moveToElement(driver.findElement(POPUP_SELECTOR))
       .build()
       .perform();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example Selenium first moves to a popup link, which triggers the popup to open. Then it moves to the popup itself so that it doesn&apos;t close automatically. It&apos;s as simple as that! But it can be a little more complicated if the popup doesn&apos;t open directly, i.e. if it takes a few milliseconds to open - for whatever reason. Then you have to wait for it in your test. For this use case you can use a &lt;a href=&quot;http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp#explicit-waits&quot;&gt;WebDriverWait&lt;/a&gt; combined with &lt;a href=&quot;http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp#expected-conditions&quot;&gt;ExpectedConditions&lt;/a&gt; to wait for an element to be visible:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Again: a selector is something like By.id(&quot;identifier&quot;)

// move &quot;mouse&quot; to the popup link which will open the popup
builder.moveToElement(driver.findElement(POPUP_LINK_SELECTOR))
       .build()
       .perform();
        
// wait 1 second for popup to be open and make sure it is visible
WebElement popup = new WebDriverWait(driver, 1).until(ExpectedConditions.visibilityOfElementLocated(POPUP_SELECTOR));
assertTrue(&quot;popup should be visible&quot;, popup.isDisplayed());

// move &quot;mouse&quot; to popup in order to avoid that it&apos;s closing automatically
builder.moveToElement(driver.findElement(POPUP_SELECTOR))
       .build()
       .perform();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Personally I prefer the second version, because with the first one you can&apos;t always be sure that Selenium is fast enough (or maybe even too fast) to find the popup. Thus it&apos;s a good idea to use an explicit wait.&lt;/p&gt;
&lt;p&gt;If you want to try it out, you can find an example project here: &lt;a href=&quot;https://github.com/seeebiii/SeleniumHoverExample&quot;&gt;https://github.com/seeebiii/SeleniumHoverExample&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>Command line tool to quickly start a Confluence standalone instance</title><link>https://www.sebastianhesse.de/2016/05/30/command-line-tool-quickly-start-confluence-standalone-instance/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/05/30/command-line-tool-quickly-start-confluence-standalone-instance/</guid><description>Start Confluence standalone instances in seconds with confluence-starter CLI. Automate downloads, configuration, and setup for addon development and testing.</description><pubDate>Mon, 30 May 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Since I&apos;m working with Atlassian Confluence addons, I always have the problem that I need to start a local &lt;strong&gt;Confluence standalone instance&lt;/strong&gt; in a specific version. This is often annoying, because you always have to download the zip file, unzip it and adjust some settings files (of course you can use the Atlassian Plugin SDK, but this has some drawbacks if you want to reproduce bugs). For example you have to add a home directory where Confluence stores the application data or add a line to be able to debug the Confluence addon you&apos;re developing. The way I did it was very error prone, because I had to follow a few steps manually. Then a few weeks ago I got the idea to create a script for it. The problem was/is: I don&apos;t like native bash/shell scripts that much. So what&apos;s the alternative? I decided to create a &lt;strong&gt;NodeJS module&lt;/strong&gt; using some external libs and provide a command line tool. Make sure to check out the project and test it: &lt;a href=&quot;https://bitbucket.org/sebastianhesse/confluence-starter&quot;&gt;confluence-starter Bitbucket Repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;With the confluence-starter CLI you can select a Confluence version which will be downloaded, unzipped and prepared with developer settings like the (debug) port, application context path, batching and minification, and then started automatically:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Downloads, unzips, prepares and starts Confluence instance on default port 1990
$ conf-starter start 5.9.6
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can also set some other settings by adding optional parameters to the command or list the already downloaded versions:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# add optional parameters: port, context path and debug port
$ conf-starter start 5.9.6 -p 1991 -c /conf -d 5005

# list already downloaded versions
$ conf-starter list

# clear home directory of a downloaded version
$ conf-starter clean 5.9.6
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you have any problems, please raise an issue in the repository. The next step is to push it to NPM and also create a GUI for it, so wait for an update! :)&lt;/p&gt;
</content:encoded></item><item><title>Apache Troubleshooting and Nginx Rescue</title><link>https://www.sebastianhesse.de/2016/02/09/apache-troubleshooting-and-nginx-rescue/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2016/02/09/apache-troubleshooting-and-nginx-rescue/</guid><description>Migrate from Apache to Nginx to solve high CPU usage and timeout issues. Complete tutorial including WordPress configuration, MySQL backup, and permalink setup.</description><pubDate>Tue, 09 Feb 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Over the last weeks my website was not available due to some problems with my Apache server. When I opened my website it took me more than 5 minutes to get any response. I first thought my Virtual Server was defective, so I restarted it. It didn&apos;t help. I had to investigate this problem a little bit more.&lt;/p&gt;
&lt;h2&gt;Apache Restarts and High CPU Usage&lt;/h2&gt;
&lt;p&gt;After trying again and again I realized that when I got a response, it was only some unformatted text like from the 90&apos;s. Maybe WordPress was the reason for that? I added a simple &lt;em&gt;index.html&lt;/em&gt; file instead of the regular &lt;em&gt;index.php&lt;/em&gt; from WordPress, which unfortunately didn&apos;t solve the problem. The next thing I tried was restarting Apache. The result? Nothing changed. That was frustrating.&lt;/p&gt;
&lt;p&gt;I checked the CPU usage just to make sure that everything was fine. It was very high because of Apache processes, which was unusual from my point of view because the traffic on my website is very low. I did some research and found some Stack Overflow threads where the same issue was described. They suggested decreasing the number of Apache workers (didn&apos;t work) and some other tweaks which didn&apos;t work either. I needed another solution, because I wanted my website to be online again, so I started looking into Nginx.&lt;/p&gt;
&lt;h2&gt;Solving the Issue with Nginx&lt;/h2&gt;
&lt;p&gt;Before you do anything you should back up your files. If you forgot how to do this or you&apos;re not a Linux pro, here is a short example of how to back up MySQL and WordPress:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# the shell will prompt you for the password of [user]
$ mysqldump -u [user] -p [database] &amp;gt; /your/folder/for/backup.sql

# backup wordpress folder
$ tar -czvf /your/folder/for/wordpress/backup.tar.gz /your/folder/or/file
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;More information about MySQL backups: &lt;a href=&quot;http://www.thegeekstuff.com/2008/09/backup-and-restore-mysql-database-using-mysqldump/&quot;&gt;http://www.thegeekstuff.com/2008/09/backup-and-restore-mysql-database-using-mysqldump/&lt;/a&gt; More information about archiving: &lt;a href=&quot;http://www.tecmint.com/18-tar-command-examples-in-linux/&quot;&gt;http://www.tecmint.com/18-tar-command-examples-in-linux/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Now I was able to move forward and found a nice tutorial about &lt;a href=&quot;https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-centos-6&quot;&gt;how to install Nginx, MySQL and PHP on CentOS&lt;/a&gt; which helped me a lot! It explains very nicely what you have to do. In my case I only had to adapt a path in the Nginx configuration to point to my WordPress folder. After starting Nginx I was quite happy that everything worked as expected. But soon I realized that not everything was working, e.g. WordPress permalinks and some plugins. The latter was very easy to explain: while changing to Nginx I also did a manual update of WordPress and forgot to copy the plugins from my backup (thanks to my backup everything went well after copying them back). The WordPress permalinks issue was a little harder to explain: if you use Apache and WordPress and want to use permalinks, WordPress automatically adds an &lt;em&gt;.htaccess&lt;/em&gt; file to your folder in order to set up permalinks for Apache. Nginx doesn&apos;t support &lt;em&gt;.htaccess&lt;/em&gt; files, so you have to change the configuration file yourself. The following blog post explains it: &lt;a href=&quot;http://nginxlibrary.com/wordpress-permalinks/&quot;&gt;http://nginxlibrary.com/wordpress-permalinks/&lt;/a&gt;&lt;/p&gt;
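&lt;p&gt;To give you an idea, here is a minimal sketch of such a permalink rule for Nginx (the root path and port are placeholders for your own setup, see the linked post for details):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# minimal sketch of a WordPress permalink setup for Nginx
# (root path and port are placeholders)
server {
    listen 80;
    root /your/folder/for/wordpress;
    index index.php;

    location / {
        # try the requested file first, then fall back to WordPress
        try_files $uri $uri/ /index.php?$args;
    }
}
&lt;/code&gt;&lt;/pre&gt;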
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;The whole process took me about 3-4 hours, which is quite a small time frame compared to the days my website was not available. (I realized the outage very late, and I also had to study for an important exam.) Now I&apos;m quite happy that everything works fine again, but I would still like to know the real reason why Apache wasn&apos;t working. Maybe I will investigate it again.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What I&apos;ve learned&lt;/strong&gt; from this story is that I will create a small script (or use some existing software) to smoke test my website and get notified if it&apos;s not online!&lt;/p&gt;
</content:encoded></item><item><title>Handle URL parameters with AngularJS</title><link>https://www.sebastianhesse.de/2015/12/31/handle-url-parameters-with-angularjs/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2015/12/31/handle-url-parameters-with-angularjs/</guid><description>Master AngularJS URL handling with $routeProvider and $routeParams. Learn path parameters, query parameters, and HTML5 pushState mode for clean URLs.</description><pubDate>Thu, 31 Dec 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;AngularJS provides several ways to use URL parameters in order to serve different views to a user or hand over information by using the URL. In this tutorial I will explain the different options you can use, e.g. for a URL like &lt;code&gt;/path/:param/with/query?foo=bar&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Important to know:&lt;/h2&gt;
&lt;h3&gt;a) Angular&apos;s URL Handling&lt;/h3&gt;
&lt;p&gt;By default Angular uses the # (hash) to switch between different views. For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;some-domain.com/index.html#/about&lt;/code&gt; -&amp;gt; Shows the view which is mapped to &quot;/about&quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;some-domain.com/index.html#/contact&lt;/code&gt; -&amp;gt; Shows the view which is mapped to &quot;/contact&quot;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Another way is to use the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/API/History_API&quot;&gt;HTML5 History pushState feature&lt;/a&gt;. This adds another entry to a browser&apos;s history, similar to calling &lt;code&gt;window.location = &quot;#foo&quot;;&lt;/code&gt; in JavaScript, but without the hash in the URL.&lt;/p&gt;
&lt;h3&gt;b) Important AngularJS Modules&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;$routeProvider:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Goal: used to define the location of a view, e.g. maps &quot;/about&quot; to &quot;views/about.html&quot;&lt;/p&gt;
&lt;p&gt;Angular Documentation: &lt;a href=&quot;https://docs.angularjs.org/api/ngRoute/provider/$routeProvider&quot;&gt;$routeProvider&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;$routeParams:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Goal: used to retrieve path and query parameters, e.g. retrieve id parameter of &quot;/user/:id&quot;&lt;/p&gt;
&lt;p&gt;Angular Documentation: &lt;a href=&quot;https://docs.angularjs.org/api/ngRoute/service/$routeParams&quot;&gt;$routeParams&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;$locationProvider:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Goal: used to set properties for handling different URL paths, e.g. set HTML5 mode for using push state&lt;/p&gt;
&lt;p&gt;Angular Documentation: &lt;a href=&quot;https://docs.angularjs.org/api/ng/provider/$locationProvider&quot;&gt;$locationProvider&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;$location:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Goal: used to retrieve information from URL or manipulate it, e.g. change URL to &quot;/another/view&quot; in order to change to another view&lt;/p&gt;
&lt;p&gt;Angular Documentation: &lt;a href=&quot;https://docs.angularjs.org/api/ng/service/$location&quot;&gt;$location&lt;/a&gt;&lt;/p&gt;
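&lt;p&gt;As a small configuration sketch for &lt;code&gt;$locationProvider&lt;/code&gt;: this is roughly how you would enable the HTML5 mode mentioned above (the module name &lt;code&gt;example&lt;/code&gt; is just a placeholder):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// configuration sketch: use HTML5 push state URLs instead of hash URLs
// (module name &apos;example&apos; is a placeholder)
angular.module(&apos;example&apos;, [&apos;ngRoute&apos;])
  .config([&apos;$locationProvider&apos;, function ($locationProvider) {
    $locationProvider.html5Mode(true);
  }]);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that with HTML5 mode enabled, your server has to deliver the app for all route URLs as well, not only for the index page.&lt;/p&gt;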
&lt;h2&gt;Usage in AngularJS:&lt;/h2&gt;
&lt;h3&gt;1.) Path Parameters&lt;/h3&gt;
&lt;p&gt;In order to provide a more flexible application, you should use path parameters. A use case might be to save a specific user based on its id. Path parameters are defined as &lt;code&gt;/:param&lt;/code&gt; within AngularJS. Examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;/user/save/:id&lt;/code&gt; -&amp;gt; id is a required parameter&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;/user/save/:id?&lt;/code&gt; -&amp;gt; id is an optional parameter&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;/user/save/:id*&lt;/code&gt; -&amp;gt; id can occur multiple times&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now you might wonder how these are implemented in Angular. Here is an example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// in order to make this code work, you have to add ng-controller=&quot;ExampleCtrl&quot; to a view or add controller:&apos;ExampleCtrl&apos; after templateUrl 

angular.module(&apos;example&apos;, [&apos;ngRoute&apos;])
  // register the route for the view save_user.html
  .config([&apos;$routeProvider&apos;, function ($routeProvider) {
    $routeProvider.when(&apos;/user/save/:userId&apos;, {
      templateUrl: &apos;save_user.html&apos;
    });
  }])

  // now register the controller
  .controller(&apos;ExampleCtrl&apos;, [&apos;$routeParams&apos;, &apos;$location&apos;,
    function ($routeParams, $location) {
      
      // e.g. read userId parameter by directly accessing $routeParams
      var userId = $routeParams.userId;

      // do something with the user id
  }]);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to change the route yourself, you should use &lt;code&gt;$location&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$location.url(...) -&amp;gt; Change the whole route
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;URL before: ...#/some/route?my=param
$location.url(&apos;/any/route&apos;)
URL after: ...#/any/route
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;$location.path(...)&lt;/code&gt; -&amp;gt; Only change the path, but not the URL query parameters&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;URL before: ...#/some/route?my=param
$location.path(&apos;/any/route&apos;)
URL after: ...#/any/route?my=param
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2.) Query Parameters&lt;/h3&gt;
&lt;p&gt;AngularJS also provides a way to manipulate URL query parameters, which are appended to a URL after the question mark, e.g. &lt;code&gt;?foo=bar&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;$location.search({...})&lt;/code&gt; -&amp;gt; Only change the query parameters, but not the path&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;URL before: ...#/some/route?my=param
$location.search(&apos;my&apos;, &apos;another&apos;)
URL after: ...#/some/route?my=another
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you want to retrieve these query parameters, you can use $routeParams as well, as shown before. But you should consider that path parameters have a higher priority than query parameters, i.e. if parameters have the same name, the path parameter always wins.&lt;/p&gt;
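&lt;p&gt;To make this precedence rule concrete, here is a small, hypothetical sketch in plain JavaScript (not actual Angular code) of how such a merge behaves when a path parameter and a query parameter share a name:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// hypothetical sketch: path parameters overwrite query parameters
// with the same name, mirroring the $routeParams behaviour described above
function mergeRouteParams(pathParams, queryParams) {
  // start with the query parameters...
  var params = Object.assign({}, queryParams);
  // ...then let path parameters win on name clashes
  return Object.assign(params, pathParams);
}

// e.g. route /user/:id resolved as /user/42 with query parameters id=999, foo=1
var merged = mergeRouteParams({ id: 42 }, { id: 999, foo: 1 });
// merged.id is 42 (the path parameter wins), merged.foo is 1
&lt;/code&gt;&lt;/pre&gt;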
</content:encoded></item><item><title>Using AngularJS and Spring together</title><link>https://www.sebastianhesse.de/2015/12/23/using-angularjs-and-spring-together/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2015/12/23/using-angularjs-and-spring-together/</guid><description>Build web apps with AngularJS and Spring Boot. Complete tutorial from Spring Initializr setup to AngularJS routing with single page application architecture.</description><pubDate>Wed, 23 Dec 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;With this blog post I want to provide an example webapp using Spring and AngularJS since both are very popular technologies. The webapp is created using Spring Initializr, Spring Boot and an example AngularJS project. It&apos;s a step-by-step tutorial with some explanations. The code can be found in this repository: &lt;a href=&quot;https://github.com/seeebiii/SpringAngularExample&quot;&gt;https://github.com/seeebiii/SpringAngularExample&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Let&apos;s start:&lt;/strong&gt;&lt;/p&gt;
&lt;h1&gt;1. Create Initial Project&lt;/h1&gt;
&lt;p&gt;Create a new Java project using Spring Boot. You could either use &lt;a href=&quot;https://start.spring.io/&quot;&gt;Spring Initializr&lt;/a&gt; or the new project dialogs in your preferred IDE, e.g. IntelliJ IDEA supports Spring Initializr out of the box. Nonetheless the most important point is that you include dependencies to Spring Boot Web, JPA, JDBC and a database in your pom.xml. The following pom.xml includes these and the Spring Boot Maven plugin as well:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&amp;gt;
&amp;lt;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;
         xsi:schemaLocation=&quot;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd&quot;&amp;gt;

    &amp;lt;!-- Put some Maven configuration here --&amp;gt;

    &amp;lt;!-- It is important to add the boot starter parent! --&amp;gt;
    &amp;lt;parent&amp;gt;
        &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
        &amp;lt;artifactId&amp;gt;spring-boot-starter-parent&amp;lt;/artifactId&amp;gt;
        &amp;lt;version&amp;gt;1.3.1.RELEASE&amp;lt;/version&amp;gt;
        &amp;lt;relativePath/&amp;gt;
        &amp;lt;!-- lookup parent from repository --&amp;gt;
    &amp;lt;/parent&amp;gt;

    &amp;lt;!-- Now define the dependencies to the other Spring boot projects --&amp;gt;
    &amp;lt;dependencies&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-data-jpa&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-jdbc&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-web&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;

        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.hsqldb&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;hsqldb&amp;lt;/artifactId&amp;gt;
            &amp;lt;scope&amp;gt;runtime&amp;lt;/scope&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-test&amp;lt;/artifactId&amp;gt;
            &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt;
        &amp;lt;/dependency&amp;gt;
    &amp;lt;/dependencies&amp;gt;

    &amp;lt;!-- Optional: Add Spring Boot Maven Plugin to start your webapp with &apos;mvn spring-boot:run&apos; --&amp;gt;
    &amp;lt;build&amp;gt;
        &amp;lt;plugins&amp;gt;
            &amp;lt;plugin&amp;gt;
                &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;spring-boot-maven-plugin&amp;lt;/artifactId&amp;gt;
            &amp;lt;/plugin&amp;gt;
        &amp;lt;/plugins&amp;gt;
    &amp;lt;/build&amp;gt;

&amp;lt;/project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See the &lt;a href=&quot;https://github.com/seeebiii/SpringAngularExample/blob/master/pom.xml&quot;&gt;full example pom.xml here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Your project folder should look like this after these steps:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/blog/spring-angularjs-directory-structure.jpg&quot; alt=&quot;Spring Boot and AngularJS project directory structure showing Maven layout&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Spring Initializr creates a default Application class to start your application. Here is an example and explanation for this default class: &lt;a href=&quot;https://docs.spring.io/spring-boot/docs/current/reference/html/using-boot-using-springbootapplication-annotation.html&quot;&gt;https://docs.spring.io/spring-boot/docs/current/reference/html/using-boot-using-springbootapplication-annotation.html&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;2. Check Your Webapp&lt;/h1&gt;
&lt;p&gt;As you can see on the project structure image, there is an index.html file available in the folder /webapp. This is a basic HTML file which I&apos;ve created in order to do a &lt;a href=&quot;https://en.wikipedia.org/wiki/Smoke_testing_(software)&quot;&gt;smoke test&lt;/a&gt;, i.e. check if everything is working fine. Start a local webserver by running the main method of &lt;strong&gt;SpringAngularExampleApplication&lt;/strong&gt; or by using the Spring Boot Maven Plugin with &lt;strong&gt;spring-boot:run&lt;/strong&gt;. In both cases a Tomcat server is started if you don&apos;t specify a different webserver. Now try to reach your webapp by opening &lt;strong&gt;http://localhost:8080/index.html&lt;/strong&gt; in your web browser. If you&apos;ve put any content into your index.html, then this content should be shown after loading.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Use the webapp folder of your Java project and keep all HTML, CSS and JS files in there, so you can update them later without a server reload.&lt;/p&gt;
&lt;h1&gt;3. Add AngularJS&lt;/h1&gt;
&lt;p&gt;Now you can add AngularJS related stuff. Doing it the easy way, you can create a single HTML file and include Angular and your own JavaScript. Example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang=&quot;en&quot;&amp;gt;
&amp;lt;head&amp;gt;
  &amp;lt;meta charset=&quot;UTF-8&quot;&amp;gt;
  &amp;lt;script src=&quot;https://ajax.googleapis.com/ajax/libs/angularjs/1.4.5/angular.js&quot;&amp;gt;&amp;lt;/script&amp;gt;
  &amp;lt;script type=&quot;text/javascript&quot;&amp;gt;
    var app = angular.module(&apos;exampleApp&apos;, []);
  &amp;lt;/script&amp;gt;
  &amp;lt;title&amp;gt;Spring Angular Example&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body ng-app=&quot;exampleApp&quot;&amp;gt;

  My Spring Angular Example!
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s it! We load Angular from Google&apos;s CDN and create an Angular module. The important lines for this initialisation are the one creating the module (line 7) and the &lt;code&gt;ng-app&lt;/code&gt; attribute on the body (line 11). Now your webapp should work with Angular. You can try it by reloading your browser at http://localhost:8080/index.html. Unfortunately we can&apos;t see any action or feature using Angular yet, so let&apos;s add Angular&apos;s routing mechanism:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang=&quot;en&quot;&amp;gt;
&amp;lt;head&amp;gt;
  &amp;lt;meta charset=&quot;UTF-8&quot;&amp;gt;
  &amp;lt;script src=&quot;https://ajax.googleapis.com/ajax/libs/angularjs/1.4.5/angular.js&quot;&amp;gt;&amp;lt;/script&amp;gt;
  &amp;lt;script src=&quot;https://ajax.googleapis.com/ajax/libs/angularjs/1.4.5/angular-route.js&quot;&amp;gt;&amp;lt;/script&amp;gt;
  &amp;lt;script type=&quot;text/javascript&quot;&amp;gt;
    var app = angular.module(&apos;exampleApp&apos;, [&apos;ngRoute&apos;]);
    app.config([&apos;$routeProvider&apos;, function($routeProvider) {
      $routeProvider.when(&apos;/example&apos;, {
        templateUrl: &apos;example.html&apos;
      });
    }]);
  &amp;lt;/script&amp;gt;
  &amp;lt;title&amp;gt;Spring Angular Example&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body ng-app=&quot;exampleApp&quot;&amp;gt;

  My Spring Angular Example!

  &amp;lt;a href=&quot;#/example&quot;&amp;gt;Click here to see Angular routes in action.&amp;lt;/a&amp;gt;

  &amp;lt;div ng-view&amp;gt;&amp;lt;/div&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The routing mechanism uses HTML anchors for &quot;navigation&quot;. You have to add...&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;line 6: a script tag to angular-route.js&lt;/li&gt;
&lt;li&gt;line 9-13: a configuration using &lt;a href=&quot;https://docs.angularjs.org/api/ngRoute/provider/$routeProvider&quot;&gt;$routeProvider&lt;/a&gt; and define the different routes and which template to load&lt;/li&gt;
&lt;li&gt;line 21: a link with the location pattern like &quot;#/DEFINED_ROUTE&quot;&lt;/li&gt;
&lt;li&gt;line 23: a container marked with &lt;a href=&quot;https://docs.angularjs.org/api/ngRoute/directive/ngView&quot;&gt;ng-view&lt;/a&gt; where the template content is loaded into&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this nice feature, you can create &lt;strong&gt;Single Page Apps&lt;/strong&gt; very easily.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Take a look at &lt;a href=&quot;https://github.com/angular/angular-seed&quot;&gt;Angular Seed @ Github&lt;/a&gt; and get a preconfigured Angular project for common use cases.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Use &lt;a href=&quot;https://nodejs.org/en/&quot;&gt;Node&lt;/a&gt;, &lt;a href=&quot;http://bower.io/&quot;&gt;Bower&lt;/a&gt; and other tools to automate your workflow and boost your productivity!&lt;/p&gt;
&lt;p&gt;I hope this will help you to setup your next webapp project using Spring and Angular! Next time I will write about the combination of a Spring based backend and Angular frontend, i.e. how to communicate between both.&lt;/p&gt;
</content:encoded></item><item><title>Manipulate a page from a chrome extension</title><link>https://www.sebastianhesse.de/2015/08/09/manipulate-a-page-from-a-chrome-extension/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2015/08/09/manipulate-a-page-from-a-chrome-extension/</guid><description>Build Chrome extensions that manipulate web pages using content scripts and context menus. Learn the message passing system between background and content scripts.</description><pubDate>Sun, 09 Aug 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In the last days I experimented with a small Chrome extension and ran into trouble manipulating the website which was currently active in the browser. The use case was that a user can right click into an input field, select an entry from the context menu (which was extended by my extension) and then the input field should be filled with some text. At first I thought this would be very easy, but because of Chrome&apos;s security restrictions, it&apos;s not. I would like to explain how I came to a solution:&lt;/p&gt;
&lt;p&gt;If you add a context menu to the right click, you have to add some lines to your manifest.json (&lt;a href=&quot;https://developer.chrome.com/extensions/getstarted&quot;&gt;click here if you don&apos;t know how a Chrome extension is created&lt;/a&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;manifest_version&quot;: 2,
  &quot;name&quot;: &quot;MyExtension&quot;,
  &quot;description&quot;: &quot;Can create a context menu!&quot;,
  &quot;version&quot;: &quot;1.0&quot;,
  &quot;browser_action&quot;: {
    &quot;default_popup&quot;: &quot;popup.html&quot;
  },
  &quot;permissions&quot;: [
    &quot;tabs&quot;,
    &quot;contextMenus&quot;
  ],
  &quot;background&quot;: {
    &quot;scripts&quot;: [
      &quot;js/contextMenus.js&quot;
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What does this manifest.json do? Besides the regular attributes like &lt;em&gt;name&lt;/em&gt; or &lt;em&gt;description&lt;/em&gt;, you set the permissions and background files. Both are self-explanatory: with the &lt;em&gt;permissions&lt;/em&gt; flag you request rights to the specified modules etc. from the Chrome browser. &lt;a href=&quot;https://developer.chrome.com/extensions/permissions&quot;&gt;This documentation page explains the permissions in general and differences between required and optional permissions.&lt;/a&gt; In my case I needed explicit permissions for &lt;em&gt;tabs&lt;/em&gt; and &lt;em&gt;contextMenus&lt;/em&gt;: with &lt;em&gt;tabs&lt;/em&gt; I can communicate with any other tab, which will be important later when I want to manipulate the page from the currently active tab. With &lt;em&gt;contextMenus&lt;/em&gt; my Chrome extension will be granted access to the context menu (right click) so that I can add a custom context entry. &lt;a href=&quot;https://developer.chrome.com/extensions/declare_permissions&quot;&gt;Click here if you need other permissions&lt;/a&gt;. The next declaration is the background: it basically defines a background page which is always running after your extension has been installed, so you can define scripts and pages which are always available. In some cases this is not necessary and thus the Chrome documentation proposes &lt;a href=&quot;https://developer.chrome.com/extensions/event_pages&quot;&gt;to use the &quot;persistent&quot;:false flag&lt;/a&gt;. In my case it wasn&apos;t an option because the contextMenu had to react to dynamic changes in the background. However, these are the basics to get your context menu up and running. Let&apos;s see how to add an entry:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;chrome.contextMenus.create({
    &quot;id&quot;: &quot;yourId&quot;,
    &quot;title&quot;: &quot;Context Menu Entry&quot;,
    &quot;contexts&quot;: [&quot;editable&quot;],
    &quot;parentId&quot;: parentId,
    &quot;onclick&quot;: onContextClick
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is quite simple. You set an ID for your menu item, a title, &lt;a href=&quot;https://developer.chrome.com/extensions/contextMenus#type-ContextType&quot;&gt;a context the item applies to&lt;/a&gt; (for me input fields, so I define editable as the context element), a parent id (if you have a parent item) and a callback function which is fired if the menu item is clicked (&lt;a href=&quot;https://developer.chrome.com/extensions/contextMenus#event-onClicked&quot;&gt;another option is to add a listener&lt;/a&gt; and &lt;a href=&quot;https://developer.chrome.com/extensions/contextMenus#method-create&quot;&gt;click here to find out other options for contextMenus.create&lt;/a&gt;). Simple as that! Now comes the tricky part: You can&apos;t directly manipulate the site where the context menu click was fired, because the background script is like another page in your Chrome which has some limitations. To get it done, &lt;a href=&quot;http://stackoverflow.com/questions/9429924/chrome-extension-how-do-i-do-modify-dom-of-a-determinate-page-with-contextmenus&quot;&gt;this StackOverflow answer explains the way to go very nicely&lt;/a&gt;. There is one exception: The referenced API is deprecated. But it&apos;s quite simple to get it running with the proposed alternatives in the documentation.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function onContextClick(info, tab) {
    var message = info.menuItemId + &quot; was clicked.&quot;;
    chrome.tabs.sendMessage(tab.id, message);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;a href=&quot;https://developer.chrome.com/extensions/contextMenus#method-create&quot;&gt;&lt;em&gt;info&lt;/em&gt; and &lt;em&gt;tab&lt;/em&gt; parameters&lt;/a&gt; give you a lot of information which can be used by your scripts. The magic happens in line 3: it sends a message to the tab with the given id (= the one where the context menu was clicked) and hands over your message. For this case you need the tabs permission from above ;) Now the content script from this tab can listen to that message. Wait... You may wonder &quot;Which content script?&quot;. And you&apos;re right! You need to add something more to your &lt;em&gt;manifest.json&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;permissions&quot;: [...],
  &quot;background&quot;: {...},
  &quot;content_scripts&quot;: [
    {
      &quot;matches&quot;: [
        &quot;https://*/*&quot;
      ],
      &quot;js&quot;: [
        &quot;js/content.js&quot;
      ]
    }
  ]
  // ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the &lt;a href=&quot;https://developer.chrome.com/extensions/content_scripts&quot;&gt;Chrome extension documentation&lt;/a&gt;: &quot;Content scripts are JavaScript files that run in the context of web pages.&quot; With this extended manifest, you tell Chrome to run the given scripts on all pages &lt;a href=&quot;https://developer.chrome.com/extensions/match_patterns&quot;&gt;that match the patterns in &lt;em&gt;matches&lt;/em&gt;&lt;/a&gt;. You can also add other scripts, for example jQuery, but note that they are injected in the order listed, so jQuery must come before your own content scripts. And now you can listen for your message in &lt;em&gt;content.js&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;chrome.runtime.onMessage.addListener(function(message, sender, sendResponse) {
   // do something with your message object
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s as simple as that!&lt;/p&gt;
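&lt;p&gt;If you load a library like jQuery alongside your own script, the order in the manifest&apos;s &lt;em&gt;js&lt;/em&gt; array matters, because the files are injected in exactly that order - a sketch (the jQuery path is just an example):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;content_scripts&quot;: [
  {
    &quot;matches&quot;: [&quot;https://*/*&quot;],
    &quot;js&quot;: [
      &quot;js/jquery.min.js&quot;,
      &quot;js/content.js&quot;
    ]
  }
]
&lt;/code&gt;&lt;/pre&gt;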
&lt;p&gt;Conclusion: the way to go for a Chrome extension is to define one or more content scripts, which have permission to manipulate web pages within Chrome. If those scripts need information from other parts of your extension, use the Chrome message-passing API.&lt;/p&gt;
</content:encoded></item><item><title>Add Spring to JavaFX</title><link>https://www.sebastianhesse.de/2015/05/26/add-spring-to-javafx/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2015/05/26/add-spring-to-javafx/</guid><description>Integrate Spring Framework with JavaFX for dependency injection in FXML controllers. Learn the SpringFxmlLoader pattern to combine Spring beans with JavaFX components.</description><pubDate>Tue, 26 May 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Yesterday I wanted to add Spring to my &lt;a href=&quot;https://github.com/seeebiii/PandocGUI&quot;&gt;Pandoc&lt;/a&gt; project and I had a lot of trouble with it. My problem was that I wanted to split my FXML files into multiple files and make each file controlled by a separate controller. This is - without Spring - not a real problem, because you just create your controller classes, add &lt;em&gt;fx:controller=&quot;YourController&quot;&lt;/em&gt; to each FXML file and everything&apos;s fine. But problems arise if you now want to have some objects to be autowired by Spring. I read a lot of tutorials about the topic, but every tutorial just showed the problem if you have only one main controller for your root FXML file. By the way &lt;a href=&quot;http://steveonjava.com/javafx-and-spring-day-1/&quot;&gt;this&lt;/a&gt; and &lt;a href=&quot;http://www.oracle.com/technetwork/articles/java/zonski-1508195.html&quot;&gt;this&lt;/a&gt; are nice tutorials to get in touch with the problem.&lt;/p&gt;
&lt;p&gt;The initial situation:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;BorderPane xmlns=&quot;http://javafx.com/javafx/null&quot; 
xmlns:fx=&quot;http://javafx.com/fxml/1&quot; 
fx:controller=&quot;MainController&quot;&amp;gt;
    
    &amp;lt;left&amp;gt;
        &amp;lt;fx:include source=&quot;left.fxml&quot; /&amp;gt;
    &amp;lt;/left&amp;gt;

    &amp;lt;center&amp;gt;
        &amp;lt;!-- your content is here --&amp;gt;
    &amp;lt;/center&amp;gt;

    &amp;lt;right&amp;gt;
        &amp;lt;fx:include source=&quot;right.fxml&quot; /&amp;gt;
    &amp;lt;/right&amp;gt;
&amp;lt;/BorderPane&amp;gt;

&amp;lt;GridPane xmlns=&quot;http://javafx.com/javafx/null&quot; 
xmlns:fx=&quot;http://javafx.com/fxml/1&quot; 
fx:controller=&quot;OtherController&quot;&amp;gt;

&amp;lt;!-- other elements which are controlled by another controller --&amp;gt;

&amp;lt;/GridPane&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The right.fxml is analogous to left.fxml.&lt;/p&gt;
&lt;p&gt;The problem: if a controller is instantiated by JavaFX, only its @FXML-annotated fields are injected. If you add Spring DI in a naive way, Spring creates a second instance, and that instance only has the Spring-annotated fields injected. So you need to combine both.&lt;/p&gt;
&lt;p&gt;Finally I came to the following solution:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public class Main extends Application {

    @Override
    public void start(Stage primaryStage) throws Exception {
        AnnotationConfigApplicationContext context
                = new AnnotationConfigApplicationContext(AppConfiguration.class);

        SpringFxmlLoader loader = new SpringFxmlLoader(context);
        Parent parent = (Parent) loader.load(&quot;/fxml/main.fxml&quot;);
        primaryStage.setScene(new Scene(parent, 1000, 900));
        primaryStage.setTitle(&quot;Your Title&quot;);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This class is the entry point of the application. It creates the Spring application context using the annotation-based approach and uses a SpringFxmlLoader to build the JavaFX scene graph and return its root/parent object.&lt;/p&gt;
&lt;p&gt;The Spring configuration class is very simple, so I provide it here in full:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@Configuration
@ComponentScan(&quot;your.package.to.scan&quot;)
public class AppConfiguration {

   /* ... */
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It scans my project for beans via the @ComponentScan annotation. You can add more configuration if you need to; I recommend reading the Spring documentation on annotation-based configuration.&lt;/p&gt;
&lt;p&gt;Now here comes the tricky part, which cost me a lot of pain:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public class SpringFxmlLoader {

    private ApplicationContext context;

    public SpringFxmlLoader(ApplicationContext appContext) {
        this.context = appContext;
    }

    /**
     * Loads the root FXML file and uses Spring&apos;s context to get controllers.
     *
     * @param resource location of FXML file
     * @return parent object of FXML layout, see {@link FXMLLoader#load(InputStream)}
     * @throws IOException in case of problems with FXML file
     */
    public Object load(final String resource) throws IOException {
        try (InputStream fxmlStream = getClass().getResourceAsStream(resource)) {
            FXMLLoader loader = new FXMLLoader();
            // set location of fxml files to FXMLLoader
            URL location = getClass().getResource(resource);
            loader.setLocation(location);
            // set controller factory
            loader.setControllerFactory(context::getBean);
            // load FXML
            return loader.load(fxmlStream);
        } catch (BeansException e) {
            throw new RuntimeException(e);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The try-with-resources block opens a stream to read the FXML file; then the FXMLLoader object is created. Two lines are very important:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;with &lt;code&gt;loader.setLocation(location)&lt;/code&gt; you set the location of your FXML file, because FXMLLoader can&apos;t determine it automatically. This only matters if your main FXML file contains fx:include elements. &lt;a href=&quot;http://praxisit.de/fxinclude/&quot;&gt;This article&lt;/a&gt; explains it nicely (in German only); in short, you&apos;ll get the following exception if you don&apos;t set the location and FXMLLoader searches for the included FXML files:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;javafx.fxml.LoadException: Base location is undefined.
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;loader.setControllerFactory(context::getBean)&lt;/code&gt; - thanks to Java 8, this is a one-liner! Otherwise you would have to implement a Callback interface like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;loader.setControllerFactory(new Callback&amp;lt;Class&amp;lt;?&amp;gt;, Object&amp;gt;() {
    @Override
    public Object call(final Class&amp;lt;?&amp;gt; param) {
        return context.getBean(param);
    }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And this is the tricky point that none of the tutorials I&apos;ve read handles. The controller factory is consulted whenever FXMLLoader encounters a controller that has to be instantiated (e.g. for your main.fxml). FXMLLoader first asks the factory for an instance; only if that fails does it create the object itself. Read &lt;a href=&quot;https://docs.oracle.com/javase/8/javafx/fxml-tutorial/jfx2_x-features.htm&quot;&gt;Customizable controller instantiation&lt;/a&gt; for additional information. The other tutorials usually implement the interface and write&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  // they get a controller bean from Spring context or the controller class from somewhere else; then do:
  
  return controller;
  
  // OR
  
  return context.getBean(controllerClass);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;which means they ignore the &lt;em&gt;param&lt;/em&gt; of the callback method (&lt;em&gt;param&lt;/em&gt; can be MainController or OtherController in this example) and thus always return the same controller.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In order to run your application, you need to add your MainController and OtherController to the Spring context, e.g. by annotating each class with &lt;code&gt;@Service&lt;/code&gt;.&lt;/p&gt;
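&lt;p&gt;Such a Spring-managed controller might look like this - a minimal sketch (the PandocService bean and the field names are hypothetical): Spring injects the Spring-annotated fields, and because FXMLLoader obtains the instance through the controller factory, the @FXML fields are injected as well:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@Service
public class MainController {

    // injected by Spring
    @Autowired
    private PandocService pandocService;

    // injected by FXMLLoader, matches an fx:id in main.fxml
    @FXML
    private TextField inputFile;

    @FXML
    public void convert() {
        this.pandocService.convert(this.inputFile.getText());
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that Spring beans are singletons by default, which is fine as long as each controller backs exactly one FXML file.&lt;/p&gt;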
&lt;p&gt;I don&apos;t know yet whether this solution works well if your application has multiple dialogs/windows, but if you have a single window with multiple controllers and FXML files, this is one way to do it.&lt;/p&gt;
</content:encoded></item><item><title>First steps with JavaFX</title><link>https://www.sebastianhesse.de/2015/05/23/first-steps-with-javafx/</link><guid isPermaLink="true">https://www.sebastianhesse.de/2015/05/23/first-steps-with-javafx/</guid><description>Learn JavaFX basics with FXML layouts and controllers in this step-by-step tutorial. Build your first JavaFX application using BorderPane, GridPane, and event handling.</description><pubDate>Sat, 23 May 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Today I want to write about my first steps with JavaFX and try to go through my first application step by step. I only provide some examples and in this case it&apos;s not a copy-and-run example! You must read the documentation which is mentioned below. In order to see the full code of the application, browse the sources in my GitHub project: &lt;a href=&quot;https://github.com/seeebiii/PandocGUI&quot;&gt;https://github.com/seeebiii/PandocGUI&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This week I had a small problem: I had to convert some wiki pages of a GitHub project from markdown to another format like &lt;code&gt;*.docx&lt;/code&gt;. I remembered that a friend had told me a few weeks ago about Pandoc, which can convert between a great many documentation formats. I gave it a try: searched for &quot;Pandoc download&quot; and found an installer for Windows. Unfortunately I&apos;m a person who prefers GUIs, so I was a little disappointed when I realized that I had to use the console for conversion. But after converting I was impressed at how well and how fast it worked; I only had to make a few changes, because when you pass multiple input files, Pandoc merges them into one output file. A few hours later I decided to create a GUI and started a JavaFX project.&lt;/p&gt;
&lt;p&gt;After reading &lt;a href=&quot;http://docs.oracle.com/javase/8/javafx/layout-tutorial/builtin_layouts.htm#JFXLY102&quot;&gt;this documentation&lt;/a&gt; about the basic layout, which consists of one or more panes, it was pretty easy to program a working JavaFX application. I decided to describe the layout via FXML (take a look &lt;a href=&quot;http://docs.oracle.com/javase/8/javafx/get-started-tutorial/hello_world.htm&quot;&gt;here to get started with an application&lt;/a&gt; and &lt;a href=&quot;http://docs.oracle.com/javase/8/javafx/get-started-tutorial/fxml_tutorial.htm#CHDCCHII&quot;&gt;read this to learn how to create FXML files&lt;/a&gt;), because this is straightforward:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&amp;gt;

&amp;lt;?import java.net.*?&amp;gt;
&amp;lt;?import javafx.geometry.*?&amp;gt;
&amp;lt;?import javafx.scene.control.*?&amp;gt;
&amp;lt;?import javafx.scene.layout.*?&amp;gt;

&amp;lt;BorderPane prefHeight=&quot;400.0&quot; prefWidth=&quot;550.0&quot; xmlns=&quot;http://javafx.com/javafx/null&quot; xmlns:fx=&quot;http://javafx.com/fxml/1&quot;
            fx:controller=&quot;de.sebastianhesse.pandocgui.Controller&quot;&amp;gt;

    &amp;lt;center&amp;gt;
         &amp;lt;!-- Here is your main content in the center, like text fields and the list view later --&amp;gt;
    &amp;lt;/center&amp;gt;
    
    &amp;lt;bottom&amp;gt;
         &amp;lt;!-- Here is your bottom line --&amp;gt;
    &amp;lt;/bottom&amp;gt;
&amp;lt;/BorderPane&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This simple layout creates a bordered layout with only a center and a bottom box. If you don&apos;t specify top, right or left, they won&apos;t be displayed. The attribute &lt;code&gt;fx:controller=&quot;...&quot;&lt;/code&gt; configures a controller class which is associated with this FXML file. I will come to that later again.&lt;/p&gt;
&lt;p&gt;Next, create some elements like text fields, buttons, text areas and a list view. But before doing so, think about how you want to arrange them. I decided to use a GridPane and order everything in a table-like manner with rows and columns:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;center&amp;gt;
    &amp;lt;GridPane&amp;gt;
            &amp;lt;!-- Pandoc executable location --&amp;gt;
            &amp;lt;Label text=&quot;Your Pandoc executable location: &quot; GridPane.rowIndex=&quot;0&quot; /&amp;gt;
            &amp;lt;HBox maxWidth=&quot;500&quot; GridPane.rowIndex=&quot;1&quot;&amp;gt;
                &amp;lt;TextField fx:id=&quot;pandocLocation&quot; prefWidth=&quot;400&quot; GridPane.columnIndex=&quot;0&quot;/&amp;gt;
                &amp;lt;Button onAction=&quot;#openPandocLocationFileDialog&quot; text=&quot;Select&quot; GridPane.columnIndex=&quot;1&quot;/&amp;gt;
            &amp;lt;/HBox&amp;gt;

    &amp;lt;/GridPane&amp;gt;
&amp;lt;/center&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The code listing shows a GridPane containing a label element and an HBox containing a text field and a button. An &lt;strong&gt;H&lt;/strong&gt;Box arranges its children &lt;strong&gt;h&lt;/strong&gt;orizontally, a &lt;strong&gt;V&lt;/strong&gt;Box &lt;strong&gt;v&lt;/strong&gt;ertically. To position elements inside the GridPane, you set a &lt;code&gt;GridPane.*&lt;/code&gt; attribute on the element, e.g. &lt;code&gt;GridPane.rowIndex=&quot;1&quot;&lt;/code&gt; places it in the second row (indices are zero-based). You can also set a column index to put an element into one specific cell.&lt;/p&gt;
&lt;p&gt;You can now fill the other rows and columns, but that is omitted here; look up the sources if you want to.&lt;/p&gt;
&lt;p&gt;You probably noticed the &lt;strong&gt;fx:id&lt;/strong&gt; and &lt;strong&gt;onAction&lt;/strong&gt; attributes in the listing above. These two attributes are special:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;fx:id=&quot;pandocLocation&quot;&lt;/code&gt; -&amp;gt; As the keyword &lt;code&gt;id&lt;/code&gt; suggests, this identifies the element. But it has more to offer: in the controller linked to the root pane you can declare a field with the same name and type, annotate it with &lt;code&gt;@FXML&lt;/code&gt;, and the dependency will be injected :) See the example below.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;onAction=&quot;#openPandocLocationFileDialog&quot;&lt;/code&gt; -&amp;gt; Also very intuitive: the &lt;em&gt;onAction&lt;/em&gt; attribute names a handler method that must be defined in the controller.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let&apos;s take a look at the example controller:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public class Controller {

    @FXML
    private Stage stage;
    @FXML
    private TextField pandocLocation;

    /**
     * FileChooser for Pandoc executable location
     */
    private FileChooser pandocLocationFileChooser = new FileChooser();

    /**
     * Opens a file dialog for location of Pandoc executable. Stores path in {@link #pandocLocation}.
     */
    @FXML
    public void openPandocLocationFileDialog() {
        File pandocExecutable = this.pandocLocationFileChooser.showOpenDialog(this.stage);
        if (null != pandocExecutable) {
            this.pandocLocation.setText(pandocExecutable.getPath());
        }
    }
    /* ... */
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There is not a lot to explain: the Stage is more or less the root window of a JavaFX application, and you need a reference to it to open a file dialog. FileChooser is the class that provides the file dialog. And &lt;code&gt;@FXML&lt;/code&gt; marks the methods and fields that the FXML loader calls and injects into your controller - very easy stuff!&lt;/p&gt;
&lt;p&gt;This controller is enough if you just want a text field, a button that opens a file dialog to select the Pandoc executable, and to store the chosen file path in the text field. The onAction method is wired up automatically; you don&apos;t have to register any listeners :)&lt;/p&gt;
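&lt;p&gt;To actually launch the application, you need an entry point that loads the FXML file - a minimal sketch, assuming the FXML is available at &lt;em&gt;/fxml/main.fxml&lt;/em&gt; on the classpath:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public class Main extends Application {

    @Override
    public void start(Stage primaryStage) throws Exception {
        // loads the FXML and instantiates the controller declared via fx:controller
        Parent root = FXMLLoader.load(getClass().getResource(&quot;/fxml/main.fxml&quot;));
        // 550x400 matches the prefWidth/prefHeight from the FXML above
        primaryStage.setScene(new Scene(root, 550, 400));
        primaryStage.setTitle(&quot;Pandoc GUI&quot;);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
&lt;/code&gt;&lt;/pre&gt;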
</content:encoded></item></channel></rss>