aws-cdk
https://github.com/aws/aws-cdk
https://deepwiki.com/aws/aws-cdk
https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md
https://github.com/aws/aws-cdk/blob/main/INTEGRATION_TESTS.md
https://github.com/aws/aws-cdk/tree/main/packages/%40aws-cdk/integ-runner#update-workflow
--disable-update-workflow
https://github.com/aws/aws-cdk/pulls/tmokmss
https://github.com/aws/aws-cdk/pulls?q=is%3Apr+author%3Asakurai-ryo+is%3Aclosed
https://github.com/aws/aws-cdk/pulls?q=is%3Apr+author%3Atmyoda+is%3Aclosed
code: zsh
yarn install
code: zsh
aws-eks-v2-alpha git:(main) yarn build
code: zsh
NODE_OPTIONS="--max-old-space-size=12288" npx lerna run build
code: zsh
NODE_OPTIONS="--max-old-space-size=12288" npx lerna run build --scope=aws-cdk-lib
code: zsh
NODE_OPTIONS="--max-old-space-size=12288" npx lerna run build --scope=@aws-cdk-testing/framework-integ
code: zsh
NODE_OPTIONS="--max-old-space-size=12288" npx lerna run build --scope=@aws-cdk/aws-eks-v2-alpha
code: zsh
--skip-nx-cache
code: zsh
error code 2
code: zsh
yarn cache clean
--skip-nx-cache
code: zsh
Error: /workspaces/aws-cdk/node_modules/eslint/bin/eslint.js . --ext=.ts --resolve-plugins-relative-to=/workspaces/aws-cdk/tools/@aws-cdk/cdk-build-tools/lib exited with error code 137
code: devcontainer.json
"hostRequirements": {
"memory": "14gb"
},
"runArgs": [
"--memory=14g"
],
https://qiita.com/ragi_chanchan/items/07945231274c505285b3
code: zsh
~/Desktop/aws-cdk/packages/aws-cdk-lib/aws-s3/test
$ NODE_OPTIONS="--max-old-space-size=8192" yarn build ./util.test.ts
~/Desktop/aws-cdk/packages/aws-cdk-lib/aws-s3/test
$ yarn test util
~/Desktop/aws-cdk/packages/aws-cdk-lib/aws-s3/test
$ yarn lint ./util.test.ts
code: zsh
➜ aws-cdk-lib git:(main) yarn test aws-opensearchservice
code: zsh
/Desktop/aws-cdk/packages/aws-cdk-lib/aws-eks/test
$ yarn test -t "outputs are synthesized by default"
code: zsh
aws-cdk-lib git:(main) npx tsc ./aws-ecs-patterns/test/fargate/load-balanced-fargate-service-v2.test.ts
If it fails, run yarn install again
Don't run anything else while the build is running
code: zsh
error Error: ENOSPC: no space left on device, write
code: zsh
➜ test git:(eks-objname) df -H
Filesystem Size Used Avail Use% Mounted on
none 8.4G 8.4G 0 100% /tmp
➜ test git:(eks-objname) sudo mount -o remount,size=64G /tmp
➜ test git:(eks-objname) df -H
Filesystem Size Used Avail Use% Mounted on
none 69G 8.4G 61G 13% /tmp
code: zsh
➜ aws-cdk-lib git:(ten-n-one-or) git push origin ten-n-one-or
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Don't push from inside the Remote Container
The commit log is still there:
code: zsh
$ git log
commit 7d2c99e092730aadfc8725582ab9d39da917990d (HEAD -> ten-n-one-or)
Author: wafuwafu13 <jaruwafu@gmail.com>
Date: Tue Jun 6 21:49:57 2023 +0000
fix: drop the last wrapping Fn::Or
code: zsh
➜ test git:(main) yarn test auto-scaling-group.test.js
yarn run v1.22.19
$ jest auto-scaling-group.test.js
No tests found, exiting with code 1
Run with --passWithNoTests to exit with code 0
In /workspaces/aws-cdk/packages/aws-cdk-lib
8479 files checked.
testMatch: /workspaces/aws-cdk/packages/aws-cdk-lib/**/test/**/?(*.)+(test).ts - 697 matches
testPathIgnorePatterns: /node_modules/ - 8479 matches
testRegex: - 0 matches
Pattern: auto-scaling-group.test.js - 0 matches
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
➜ test git:(main) yarn test auto-scaling-group
yarn run v1.22.19
$ jest auto-scaling-group
PASS aws-autoscaling/test/auto-scaling-group.test.ts (20.474 s)
code: zsh
➜ framework-integ git:(asg-instance-requirements) export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
npx lerna run build --scope=@aws-cdk-testing/framework-integ
npm install -g aws-cdk
cdk bootstrap aws://xxx/ap-northeast-1
yarn integ aws-ecs-patterns/test/fargate/integ.alb-foo.js --update-on-failed --parallel-regions eu-west-1
Deleting .lock files and passing --skip-nx-cache don't help, so create a new file and try with that
Empty the cdk-hnb659fds-assets- bucket
Delete the .snapshot folder
code: zsh
~/Desktop/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-eks
$ yarn integ aws-eks/test/integ.alb-controller-authapi.js
Exemption Request
https://github.com/aws/aws-cdk/pull/25999#issuecomment-1593701604
Looking back on the nearly 8 months since my CDK contribution debut
Where to use Lazy in AWS CDK
issue
Types
code: ts
export interface FlagInfoBase {
/** Single-line description for the flag */
readonly summary: string;
/** Detailed description for the flag (Markdown) */
readonly detailsMd: string;
/**
* Version number the flag was introduced in each version line.
*
* undefined means flag is not configurable in that line; but if
* unconfiguredBehavesLike is set for that line, we will document the default
* behavior (even though it's not configurable).
*/
readonly introducedIn: { v1?: string; v2?: string };
/** What you would like new users to set this flag to (default in new projects) */
readonly recommendedValue: any;
/**
* If this flag is not set, the CLI library will behave as if the flag was set to <this>.
*
* If this flag is not set, we will assume you meant false, and the recommendedValue is true.
*
* This value is most useful for flags that allow opting out of undesirable behavior. To avoid having
* to word our flag name like skipUndesirableBehavior and people having to do boolean gymnastics in
* their head, we will name the flag doUndesirableBehavior, set
* unconfiguredBehavesLike: true, and recommendedValue: false.
*
* Again: the value you put here should describe whatever value gets us the
* legacy behavior, from before this flag was introduced.
*/
readonly unconfiguredBehavesLike?: { v1?: any; v2?: any };
}
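A hypothetical entry conforming to FlagInfoBase (flag name, versions, and wording invented for illustration), showing how recommendedValue and unconfiguredBehavesLike relate:
code: ts
// Hypothetical example only: not a real feature flag.
const doUndesirableBehavior: FlagInfoBase = {
  summary: 'Controls the legacy (undesirable) behavior',
  detailsMd: 'When set to false, constructs stop doing the undesirable thing.',
  introducedIn: { v2: '2.100.0' },       // not configurable in v1
  recommendedValue: false,                // what new projects should set
  unconfiguredBehavesLike: { v2: true },  // an unset flag keeps the legacy behavior
};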
issue
packages/aws-cdk-lib/aws-eks/README.md
Migrating from ConfigMap to Access Entry
code: md
- [Access Entry](#access-entry)
- [Migrating from ConfigMap to Access Entry](#migrating-from-configmap-to-access-entry)
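A minimal sketch of the target state that README section describes, assuming the authenticationMode / grantAccess / AccessPolicy APIs in aws-cdk-lib/aws-eks (role ARN and versions are placeholders):
code: ts
// Sketch, assuming the access-entry APIs described in the aws-eks README.
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as iam from 'aws-cdk-lib/aws-iam';

export class AccessEntryMigrationStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const cluster = new eks.Cluster(this, 'Cluster', {
      version: eks.KubernetesVersion.V1_32,
      // Migrate one step per deploy: CONFIG_MAP -> API_AND_CONFIG_MAP -> API.
      authenticationMode: eks.AuthenticationMode.API_AND_CONFIG_MAP,
    });
    // Grant cluster admin through an access entry instead of the aws-auth ConfigMap.
    const adminRole = iam.Role.fromRoleArn(this, 'Admin', 'arn:aws:iam::123456789012:role/Admin');
    cluster.grantAccess('AdminAccess', adminRole.roleArn, [
      eks.AccessPolicy.fromAccessPolicyName('AmazonEKSClusterAdminPolicy', {
        accessScopeType: eks.AccessScopeType.CLUSTER,
      }),
    ]);
  }
}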
issue
(aws-eks): Support --take-ownership flag in new helm version
https://github.com/aws/aws-cdk/pull/32981/files
https://github.com/helm/helm/pull/13439
code: zsh
$ git stash
Saved working directory and index state WIP on main: e27cd2ca60 revert(ec2): support Firehose IDeliveryStream as flow log destination (#34592)
https://github.com/helm/helm/pull/13439#pullrequestreview-2494359393
code: zsh
$ helm upgrade --install foo foo/
Release "foo" does not exist. Installing it now.
Error: Unable to continue with install: ServiceAccount "foo" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "foo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
$ helm upgrade --install foo foo/ --take-ownership
Release "foo" does not exist. Installing it now.
NAME: foo
LAST DEPLOYED: Mon Jun 2 19:58:41 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=foo,app.kubernetes.io/instance=foo" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
foo default 1 2025-06-02 19:58:41.762233 +0100 IST deployed foo-0.1.0 1.16.0
issue
Amplify buildSpec loading from a local yaml file using the from_asset function errors out on requiring scope
https://github.com/aws/aws-cdk/blob/e27cd2ca60d8249cb122c1be525ee9db6a4cfd7d/packages/%40aws-cdk/aws-amplify-alpha/lib/app.ts#L282
code: ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
export class CdksampleStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const spec = codebuild.BuildSpec.fromAsset('build.yml')
const build = spec.toBuildSpec()
}
}
code: zsh
Error: AssetBuildSpec requires a scope argument
code: ts
const build = spec.toBuildSpec(scope)
code: zsh
Error: Asset at 'Code' should be created in the scope of a Stack, but no Stack found
code: ts
const build = spec.toBuildSpec(this)
code: zsh
TypeError: Cannot read properties of undefined (reading 'addToPrincipalPolicy')
issue
DockerImageAsset.buildSsh should allow arrays
https://github.com/aws/aws-cdk/pull/26846/files
https://github.com/aws/aws-cdk/pull/26356/files
issue
batch: support FireLens
issue
hasWarning police
issue
stepfunctions: Map.ItemSelector doesn't accept JSONata
aws_stepfunction_tasks: BatchSubmitJob task doesn't support JSONata expression for arraySize
test(stepfunction): improve test coverage for json-path.ts
issue
(aws-eks): kubectl objectName is required by CDK, but not always by kubectl get
fix(eks): make objectName optional in KubernetesObjectValue
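What the fix would allow, as a minimal sketch (cluster attributes are placeholders); the original repro against a pod follows below:
code: ts
// Sketch of the proposed behavior: omit objectName and address the list that
// `kubectl get services` returns via jsonPath.
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';

export class FirstServiceNameStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const cluster = eks.Cluster.fromClusterAttributes(this, 'ExistingCluster', {
      clusterName: 'my-cluster',
      kubectlRoleArn: 'arn:aws:iam::123456789012:role/KubectlRole',
      kubectlLayer: new KubectlV33Layer(this, 'KubectlLayer'),
    });
    const firstServiceName = new eks.KubernetesObjectValue(this, 'FirstServiceName', {
      cluster,
      objectType: 'services', // plural list query, so no single objectName to point at
      jsonPath: '.items[0].metadata.name',
    });
    new cdk.CfnOutput(this, 'FirstServiceNameOutput', { value: firstServiceName.value });
  }
}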
code: ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV33Layer } from '@aws-cdk/lambda-layer-kubectl-v33';
export class CdkPlaygroundStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const cluster = eks.Cluster.fromClusterAttributes(this, 'ExistingCluster', {
clusterName: 'dub0602',
kubectlRoleArn: 'arn:aws:iam::534165444269:user/herotaka',
kubectlLayer: new KubectlV33Layer(this, 'KubectlLayer')
});
const podInfo = new eks.KubernetesObjectValue(this, 'PodInfo', {
cluster: cluster,
objectType: 'pod',
objectName: 'foo-867d85488b-8lrj7',
objectNamespace: 'default',
jsonPath: '.metadata.name',
});
console.log(podInfo.value, '+++++++++++++++++++')
new cdk.CfnOutput(this, 'FirstPodName', {
value: podInfo.value,
description: 'Name of the first pod in the namespace',
});
}
}
code: zsh
PodInfo50615348
CREATE_FAILED
Likely root cause
-
Response object is too long.
code: typescript
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as iam from 'aws-cdk-lib/aws-iam';
import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
export class CdkPlaygroundStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const clusterAdmin = new iam.Role(this, 'AdminRole', {
assumedBy: new iam.AccountRootPrincipal()
});
const cluster = new eks.Cluster(this, 'cluster', {
clusterName: 'cluster',
mastersRole: clusterAdmin,
version: eks.KubernetesVersion.V1_32,
kubectlLayer: new KubectlV32Layer(this, 'KubectlLayer')
});
const ingressControllerChart = cluster.addHelmChart('IngressController', {
chart: 'nginx-ingress',
repository: 'https://helm.nginx.com/stable',
release: 'ingress-controller',
});
const albAddress = new eks.KubernetesObjectValue(this, 'elbAddress', {
cluster,
objectType: 'Service',
objectName: `${'ingress-controller'}-nginx-ingress`,
jsonPath: '.status.loadBalancer.ingress[0].hostname',
});
console.log(albAddress.value, '+++++++++++++++++++')
new cdk.CfnOutput(this, 'FirstPodName', {
value: albAddress.value,
description: 'Name of the first pod in the namespace',
});
}
}
code: zsh
Received response status FAILED from custom resource. Message returned: Error: Timeout waiting for output from kubectl command: ['get', '-n', 'default', 'Service', 'ingress-controller-nginx-ingress', "-o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'"] (last_error=b'Error from server (NotFound): services "ingress-controller-nginx-ingress" not found\n') Logs: /aws/lambda/CdkPlaygroundStack-awscdkawseksKub-Handler886CB40B-cMPasOjWX9D0 at invokeUserFunction (/var/task/framework.js:2:6) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async onEvent (/var/task/framework.js:1:369) at async Runtime.handler (/var/task/cfn-response.js:1:1837)
code: zsh
~/Desktop/cdk_playground
$ helm install ingress-controller nginx-stable/nginx-ingress
NAME: ingress-controller
LAST DEPLOYED: Fri Jun 27 17:20:46 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
NGINX Ingress Controller 5.0.0 has been installed.
For release notes for this version please see: https://docs.nginx.com/nginx-ingress-controller/releases/
Installation and upgrade instructions: https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/
~/Desktop/cdk_playground
$ kubectl get services -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
foo ClusterIP 10.100.186.95 <none> 80/TCP 24d
ingress-controller-nginx-ingress-controller LoadBalancer 10.100.147.2 a54e2fb6e5c6a450a80c1402ce89a278-1858791775.eu-west-1.elb.amazonaws.com 80:31726/TCP,443:31606/TCP 74s
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 24d
~/Desktop/cdk_playground
$ kubectl get -n default service ingress-controller-nginx-ingress-controller -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'
a54e2fb6e5c6a450a80c1402ce89a278-1858791775.eu-west-1.elb.amazonaws.com%
$ kubectl get -n default services -o=jsonpath='{.items[0].metadata.name}'
foo%
code: ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as iam from 'aws-cdk-lib/aws-iam';
import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
export class CdkPlaygroundStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const clusterAdmin = new iam.Role(this, 'AdminRole', {
assumedBy: new iam.AccountRootPrincipal()
});
const cluster = new eks.Cluster(this, 'cluster', {
clusterName: 'cluster',
mastersRole: clusterAdmin,
version: eks.KubernetesVersion.V1_32,
kubectlLayer: new KubectlV32Layer(this, 'KubectlLayer')
});
const ingressControllerChart = cluster.addHelmChart('IngressController', {
chart: 'nginx-ingress',
repository: 'https://helm.nginx.com/stable',
release: 'ingress-controller',
});
const albAddress = new eks.KubernetesObjectValue(this, 'elbAddress', {
cluster,
objectType: 'service',
objectName: 'ingress-controller-nginx-ingress-controller',
jsonPath: '.status.loadBalancer.ingress[0].hostname',
});
console.log(albAddress.value, '+++++++++++++++++++')
new cdk.CfnOutput(this, 'FirstPodName', {
value: albAddress.value,
description: 'Name of the first pod in the namespace',
});
}
}
Outputs:
CdkPlaygroundStack.FirstPodName = a0f869aed77a24a79a743017b89b2849-1365657121.eu-west-1.elb.amazonaws.com
code: ts
/// !cdk-integ pragma:disable-update-workflow
import * as iam from 'aws-cdk-lib/aws-iam';
import { App, CfnOutput, Stack, StackProps } from 'aws-cdk-lib';
import * as integ from '@aws-cdk/integ-tests-alpha';
import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
import * as eks from '../lib';
class EksClusterStack extends Stack {
private cluster: eks.Cluster;
constructor(scope: App, id: string, props?: StackProps) {
super(scope, id, props);
const mastersRole = new iam.Role(this, 'AdminRole', {
assumedBy: new iam.AccountRootPrincipal(),
});
this.cluster = new eks.Cluster(this, 'Cluster', {
mastersRole,
version: eks.KubernetesVersion.V1_32,
kubectlProviderOptions: {
kubectlLayer: new KubectlV32Layer(this, 'kubectlLayer'),
},
});
this.cluster.addHelmChart('IngressController', {
chart: 'nginx-ingress',
repository: 'https://helm.nginx.com/stable',
release: 'ingress-controller',
});
const elbAddress = new eks.KubernetesObjectValue(this, 'elbAddress', {
cluster: this.cluster,
objectType: 'service',
objectName: 'ingress-controller-nginx-ingress-controller',
jsonPath: '.status.loadBalancer.ingress[0].hostname',
});
const serviceName = new eks.KubernetesObjectValue(this, 'serviceName', {
cluster: this.cluster,
objectType: 'services',
jsonPath: '.items[0].metadata.name',
});
new CfnOutput(this, 'ELBAddress', { value: elbAddress.value });
new CfnOutput(this, 'ServiceName', { value: serviceName.value });
}
}
const app = new App({
postCliContext: {
'@aws-cdk/aws-lambda:createNewPoliciesWithAddToRolePolicy': true,
},
});
const stack = new EksClusterStack(app, 'aws-cdk-eks-k8s-object-value');
new integ.IntegTest(app, 'aws-cdk-eks-k8s-object-value-integ', {
testCases: [stack],
// Test includes assets that are updated weekly. If not disabled, the upgrade PR will fail.
diffAssets: false,
cdkCommandOptions: {
deploy: {
args: {
rollback: true,
},
},
},
});
code: zsh
ERROR RuntimeError: Timeout waiting for output from kubectl command: ['get', '-n', 'default', 'service', 'ingress-controller-nginx-ingress-controller', "-o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'"] (last_error=b'E0627 19:27:43.493322 192 memcache.go:265] "Unhandled Error" err="couldn\'t get current server API group list: the server has asked for the client to provide credentials"\nE0627 19:27:44.487279 192 memcache.go:265] "Unhandled Error" err="couldn\'t get current server API group list: the server has asked for the client to provide credentials"\nE0627 19:27:45.470773 192 memcache.go:265] "Unhandled Error" err="couldn\'t get current server API group list: the server has asked for the client to provide credentials"\nE0627 19:27:46.453872 192 memcache.go:265] "Unhandled Error" err="couldn\'t get current server API group list: the server has asked for the client to provide credentials"\nE0627 19:27:47.447300 192 memcache.go:265] "Unhandled Error" err="couldn\'t get current server API group list: the server has asked for the client to provide credentials"\nerror: You must be logged in to the server (the server has asked for the client to provide credentials)\n') Traceback (most recent call last): File "/var/task/index.py", line 23, in handler return get_handler(event, context) File "/var/task/get/__init__.py", line 48, in get_handler output = wait_for_output(cmd, int(timeout_seconds)) File "/var/task/get/__init__.py", line 73, in wait_for_output raise RuntimeError(f'Timeout waiting for output from kubectl command: {args} (last_error={error})')
issue
https://github.com/aws/aws-cdk/blob/393919f760a9257886536e37596dbac6119a9acc/packages/%40aws-cdk/aws-eks-v2-alpha/lib/cluster.ts#L180
There is no spot instance support here
docs(eks-v2-alpha): remove spot instance related documentation
issue
fix(opensearchservice): create AWS::Logs::ResourcePolicy instead of Custom::CloudwatchLogResourcePolicy
https://github.com/aws/aws-cdk/blob/54e822284df3ae24dd00c30a84be0cf90bfda408/packages/aws-cdk-lib/aws-opensearchservice/lib/domain.ts#L1880-L1886
Resource Policy
https://github.com/aws/aws-cdk/issues/5343
https://github.com/aws/aws-cdk/pull/17015/files
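For reference, a minimal sketch of the replacement resource using aws-cdk-lib/aws-logs directly (the statement below is illustrative, not the exact policy domain.ts generates):
code: ts
// Sketch: model the log resource policy as AWS::Logs::ResourcePolicy via aws-logs,
// instead of the Custom::CloudwatchLogResourcePolicy custom resource.
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as logs from 'aws-cdk-lib/aws-logs';

export class LogResourcePolicyStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const slowSearchLogGroup = new logs.LogGroup(this, 'SlowSearchLogs');
    new logs.ResourcePolicy(this, 'ESLogGroupPolicy', {
      policyStatements: [
        new iam.PolicyStatement({
          actions: ['logs:PutLogEvents', 'logs:CreateLogStream'],
          principals: [new iam.ServicePrincipal('es.amazonaws.com')],
          resources: [slowSearchLogGroup.logGroupArn],
        }),
      ],
    });
  }
}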
code: zsh
➜ test git:(opensearchservice-logresourcepolicy) yarn integ aws-opensearchservice/test/integ.opensearch.js
yarn run v1.22.22
$ integ-runner --language javascript aws-opensearchservice/test/integ.opensearch.js
Verifying integration test snapshots...
CHANGED aws-opensearchservice/test/integ.opensearch 3.515s
IAM Statement Changes
┌───┬──────────┬────────┬───────────────────────────┬───────────────────────────────────────────────────────────────┬───────────┐
│ │ Resource │ Effect │ Action │ Principal │ Condition │
├───┼──────────┼────────┼───────────────────────────┼───────────────────────────────────────────────────────────────┼───────────┤
│ - │ * │ Allow │ logs:DeleteResourcePolicy │ AWS:${AWS679f53fac002430cb0da5b7982bd2287ServiceRoleC1EA0FF2} │ │
│ │ │ │ logs:PutResourcePolicy │ │ │
│ - │ * │ Allow │ logs:DeleteResourcePolicy │ AWS:${AWS679f53fac002430cb0da5b7982bd2287ServiceRoleC1EA0FF2} │ │
│ │ │ │ logs:PutResourcePolicy │ │ │
└───┴──────────┴────────┴───────────────────────────┴───────────────────────────────────────────────────────────────┴───────────┘
(NOTE: There may be security-related changes not in this list. See https://github.com/aws/aws-cdk/issues/1299)
Resources
- Custom::CloudwatchLogResourcePolicy Domain1ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fc3D726D58 destroy
- AWS::IAM::Policy Domain1ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fcCustomResourcePolicyBE9BFE5D destroy
- Custom::CloudwatchLogResourcePolicy Domain2ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca0286FF1B15 destroy
- AWS::IAM::Policy Domain2ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02CustomResourcePolicy2DB46870 destroy
+ AWS::Logs::ResourcePolicy Domain1ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fcResourcePolicy2D698D90
+ AWS::Logs::ResourcePolicy Domain2ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02ResourcePolicy0B724384
~ AWS::OpenSearchService::Domain Domain19FCBCB91
└─ ~ DependsOn
└─ @@ -1,4 +1,3 @@
[
- "Domain1ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fc3D726D58",
- "Domain1ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fcCustomResourcePolicyBE9BFE5D"
+ "Domain1ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fcResourcePolicy2D698D90"
]
~ AWS::OpenSearchService::Domain Domain2644FE48C
└─ ~ DependsOn
└─ @@ -1,4 +1,3 @@
[
- "Domain2ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca0286FF1B15",
- "Domain2ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02CustomResourcePolicy2DB46870"
+ "Domain2ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02ResourcePolicy0B724384"
]
code: zsh
Failed resources:
cdk-integ-opensearch | 5:36:20 PM | CREATE_FAILED | AWS::Logs::ResourcePolicy | Domain2/ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02/ResourcePolicy (Domain2ESLogGroupPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02ResourcePolicy0B724384) Resource handler returned message: "Resource of type 'AWS::Logs::ResourcePolicy' with identifier '{"/properties/PolicyName":"ESLogPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02"}' already exists." (RequestToken: c2034978-34da-b01c-5a87-839c81101493, HandlerErrorCode: AlreadyExists)
cdk-integ-opensearch | 5:36:20 PM | CREATE_FAILED | AWS::Logs::ResourcePolicy | Domain1/ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fc/ResourcePolicy (Domain1ESLogGroupPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fcResourcePolicy2D698D90) Resource handler returned message: "Resource of type 'AWS::Logs::ResourcePolicy' with identifier '{"/properties/PolicyName":"ESLogPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fc"}' already exists." (RequestToken: fbf7533a-5022-9ba7-c265-d65d60568cae, HandlerErrorCode: AlreadyExists)
❌ cdk-integ-opensearch failed: _ToolkitError: The stack named cdk-integ-opensearch failed to deploy: UPDATE_ROLLBACK_COMPLETE: Resource handler returned message: "Resource of type 'AWS::Logs::ResourcePolicy' with identifier '{"/properties/PolicyName":"ESLogPolicyc80140a7754e9c0dd4e81167ef19e15da5b55dca02"}' already exists." (RequestToken: c2034978-34da-b01c-5a87-839c81101493, HandlerErrorCode: AlreadyExists), Resource handler returned message: "Resource of type 'AWS::Logs::ResourcePolicy' with identifier '{"/properties/PolicyName":"ESLogPolicyc881416c4fcb1ec2b4bf7f47a5cde4097f01ec50fc"}' already exists." (RequestToken: fbf7533a-5022-9ba7-c265-d65d60568cae, HandlerErrorCode: AlreadyExists)
FAILED aws-opensearchservice/test/integ.opensearch-OpenSearchInteg/DefaultTest (undefined/eu-west-1) 1618.039s
Integration test failed: Error: Command exited with status 1
Test Results:
Tests: 1 failed, 1 total
issue
https://github.com/aws/aws-cdk/blob/64662b2225dce18ee2da0eb46331a3a0155ecfb5/packages/aws-cdk-lib/aws-eks/lib/cluster.ts#L1829-L1841
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks.Cluster.html#mastersrole
An IAM role that will be added to the system:masters Kubernetes RBAC group.
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html#masters-role
When you create a cluster, you can specify a mastersRole. The Cluster construct will associate this role with the system:masters RBAC group, giving it super-user access to the cluster.
issue
(EKS): outputConfigCommand does not work any more without mastersRole
https://github.com/aws/aws-cdk/pull/33673
https://github.com/aws/aws-cdk/pull/34539
hasWarning
code: cluster.ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
// import * as sqs from 'aws-cdk-lib/aws-sqs';
export class CdksampleStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_32,
kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
});
}
}
code: cloudtrail
"eventName": "CreateCluster",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AROAXYXV522W3YH2NCH4S:AWSCDK.EKSCluster.Create.xxx",
"arn": "arn:aws:sts::xxx:assumed-role/CdksampleStack-helloeksCreationRolexxx/AWSCDK.EKSCluster.Create.xxx",
code: zsh
$ kubectl describe configmap -n kube-system aws-auth
E0527 20:26:20.193891 52430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0527 20:26:21.096694 52430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0527 20:26:22.089118 52430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0527 20:26:23.203637 52430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0527 20:26:24.203720 52430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
error: You must be logged in to the server (the server has asked for the client to provide credentials)
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
You can specify an IAM role ARN with the --role-arn option to use for authentication when you issue kubectl commands. Otherwise, the IAM principal in your default AWS CLI or SDK credential chain is used.
code: cluster.ts
const adminRole = iam.Role.fromRoleArn(this, 'AdminRole',
'arn:aws:iam::xxx:role/Admin'
);
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_32,
mastersRole: adminRole,
kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
});
code: zsh
$ aws eks update-kubeconfig --name helloeks5A23CE00-b676cc0afad94f50b2d3ac573169b36c --region eu-west-1 --role-arn arn:aws:iam::
$ kubectl describe configmap -n kube-system aws-auth
Name: aws-auth
Namespace: kube-system
Labels: aws.cdk.eks/prune-c86009bf911ab8ca079a6fa156b5be018bc0882f84=
Annotations: <none>
Data
====
mapAccounts:
----
[]
mapRoles:
----
[{"rolearn":"arn:aws:iam::xxx:role/Admin","username":"arn:aws:iam::xxx:role/Admin","groups":"system:masters"},{"rolearn":"arn:aws:iam::xxx:role/CdksampleStack-helloeksNodegroupDefaultCapacityNode-uBwgQZgrnzMU","username":"system:node:{{EC2PrivateDNSName}}","groups":"system:bootstrappers","system:nodes"}]
code: zsh
commit 93d4d4411ce90f5f6dc225cc838c3e32d8afd853
Author: wafuwafu13 <jaruwafu@gmail.com>
Date: Fri May 23 10:20:28 2025 +0100
fix(eks): ConfigCommand and GetTokenCommand are output by default
commit ce88c7616188183059640f9e16fedc2e256e0b40
Author: Kazuho Cryer-Shinozuka <malaysia.cryer@gmail.com>
Date: Fri May 23 11:24:52 2025 +0900
code: zsh
Error: ENOENT: no such file or directory, open 'cdk-integ.out.integ.job-submission-workflow.js.snapshot/manifest.json'
at Object.readFileSync (node:fs:449:20)
at Manifest.loadManifest (/workspaces/aws-cdk/node_modules/@aws-cdk/cloud-assembly-schema/lib/manifest.js:155:29)
at Manifest.loadAssemblyManifest (/workspaces/aws-cdk/node_modules/@aws-cdk/cloud-assembly-schema/lib/manifest.js:42:25)
at new CloudAssembly (/workspaces/aws-cdk/packages/aws-cdk-lib/cx-api/lib/cloud-assembly.js:49:45)
at CloudAssemblyBuilder.buildAssembly (/workspaces/aws-cdk/packages/aws-cdk-lib/cx-api/lib/cloud-assembly.js:303:16)
at synthesize (/workspaces/aws-cdk/packages/aws-cdk-lib/core/lib/private/synthesis.js:47:30)
at App.synth (/workspaces/aws-cdk/packages/aws-cdk-lib/core/lib/stage.js:120:58)
at Object.<anonymous> (/workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-stepfunctions-tasks/test/emrcontainers/integ.job-submission-workflow.js:94:5)
at Module._compile (node:internal/modules/cjs/loader:1469:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1548:10) {
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: 'cdk-integ.out.integ.job-submission-workflow.js.snapshot/manifest.json'
}
Node.js v20.18.3
ERROR /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-stepfunctions-tasks/test/emrcontainers/integ.job-submission-workflow.js (undefined/eu-west-1) 4.713s
Error during integration test: Error: Command exited with status 1
Test Results:
Tests: 1 failed, 1 total
code: zsh
Running in parallel across regions: eu-west-1
Running test /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-stepfunctions-tasks/test/emrcontainers/integ.job-submission-workflow.js in eu-west-1
ERROR /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-stepfunctions-tasks/test/emrcontainers/integ.job-submission-workflow.js (undefined/eu-west-1) 4.628s
Error during integration test: Error: ENOTEMPTY: directory not empty, rmdir 'test/aws-stepfunctions-tasks/test/emrcontainers/cdk-integ.out.integ.job-submission-workflow.js.snapshot'
Test Results:
Tests: 1 failed, 1 total
Error: Some integration tests failed!
at main (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/index.js:10256:15)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
code: zsh
❌ aws-stepfunctions-tasks-emr-containers-start-job-run failed: _ToolkitError: The stack named aws-stepfunctions-tasks-emr-containers-start-job-run failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Resource handler returned message: "Could not unzip uploaded file. Please check your file, then try to upload again. (Service: Lambda, Status Code: 400, Request ID: d906f15f-97d3-446b-9e52-47b0c1efc39e) (SDK Attempt Count: 1)" (RequestToken: b0e3f440-6607-101f-fe4a-357ec9eef41e, HandlerErrorCode: InvalidRequest)
FAILED aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run-aws-stepfunctions-tasks-emr-containers-start-job-run-integ/DefaultTest (undefined/eu-west-1) 182.714s
Integration test failed: Error: Command exited with status 1
Test Results:
Tests: 1 failed, 1 total
code: zsh
➜ test git:(eks-output-command) yarn integ aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run.js --update-on-failed
yarn run v1.22.22
$ integ-runner --language javascript aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run.js --update-on-failed
Verifying integration test snapshots...
CHANGED aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run 2.845s
Resources
~ Custom::AWS StartaJobRunGetEksClusterInfoD0E31373
└─ ~ Create
└─ ~ .Fn::Join:
└─ @@ -8,6 +8,6 @@
"Id"
]
},
- "\"},\"outputPaths\":\"virtualCluster.containerProvider.info.eksInfo.namespace\",\"virtualCluster.containerProvider.id\",\"physicalResourceId\":{\"id\":\"id\"},\"logApiResponseData\":true}"
+ "\"},\"outputPaths\":\"virtualCluster.containerProvider.info.eksInfo.namespace\",\"virtualCluster.containerProvider.id\",\"physicalResourceId\":{\"id\":\"id\"}}"
]
]
Snapshot Results:
Tests: 1 failed, 1 total
Failed: /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run.js
Running integration tests for failed tests...
Running in parallel across regions: us-east-1, us-east-2, us-west-2
Running test /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run.js in us-east-1
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
SUCCESS aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run-aws-stepfunctions-tasks-emr-containers-start-job-run-integ/DefaultTest 2915.421s
NO ASSERTIONS
Test Results:
Tests: 1 passed, 1 total
Could not determine git origin branch.
You need to manually checkout the snapshot directory test/aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run.js.snapshotfrom the merge-base (https://git-scm.com/docs/git-merge-base)
error: Error: Command exited with status 128
at exec2 (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/workers/extract/index.js:13039:11)
at IntegTestRunner.checkoutSnapshot (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/workers/extract/index.js:20530:26)
at IntegTestRunner.deploy (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/workers/extract/index.js:20849:18)
at IntegTestRunner.runIntegTestCase (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/workers/extract/index.js:20628:37)
at Function.integTestWorker (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/workers/extract/index.js:32167:34)
at MessagePort.<anonymous> (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/workers/extract/index.js:967:31)
at nodejs.internal.kHybridDispatch (node:internal/event_target:831:20)
at MessagePort.<anonymous> (node:internal/per_context/messageport:23:28)
Done in 2921.33s.
➜ test git:(eks-output-command) git status
Refresh index: 100% (20560/20560), done.
On branch eks-output-command
nothing to commit, working tree clean
➜ test git:(eks-output-command) git remote show origin
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
https://qiita.com/shizuma/items/2b2f873a0034839e47ce
code: zsh
Failed resources:
aws-stepfunctions-tasks-emr-containers-start-job-run | 12:52:08 PM | UPDATE_FAILED | AWS::IAM::Policy | SingletonLambda8693BB64968944B69AAFB0CC9EB8757C/ServiceRole/DefaultPolicy (SingletonLambda8693BB64968944B69AAFB0CC9EB8757CServiceRoleDefaultPolicy87B52EEA) CustomResource attribute error: Vendor response doesn't contain virtualCluster.containerProvider.id attribute in object arn:aws:cloudformation:eu-west-1:xxx:stack/aws-stepfunctions-tasks-emr-containers-start-job-run/14b7edd0-389b-11f0-8d95-0296e5db5705|StartaJobRunGetEksClusterInfoD0E31373|319ec1dc-18b2-4aaf-93ef-21c85b2b9846
❌ aws-stepfunctions-tasks-emr-containers-start-job-run failed: _ToolkitError: The stack named aws-stepfunctions-tasks-emr-containers-start-job-run failed to deploy: UPDATE_ROLLBACK_COMPLETE: CustomResource attribute error: Vendor response doesn't contain virtualCluster.containerProvider.id attribute in object arn:aws:cloudformation:eu-west-1:xxx:stack/aws-stepfunctions-tasks-emr-containers-start-job-run/14b7edd0-389b-11f0-8d95-0296e5db5705|StartaJobRunGetEksClusterInfoD0E31373|319ec1dc-18b2-4aaf-93ef-21c85b2b9846
FAILED aws-stepfunctions-tasks/test/emrcontainers/integ.start-job-run-aws-stepfunctions-tasks-emr-containers-start-job-run-integ/DefaultTest (undefined/eu-west-1) 3004.468s
Integration test failed: Error: Command exited with status 1
Test Results:
Tests: 1 failed, 1 total
Error: Some integration tests failed!
at main (/workspaces/aws-cdk/node_modules/@aws-cdk/integ-runner/lib/index.js:10256:15)
error Command failed with exit code 1.
code: zsh
$ git push origin eks-output-command
You are pushing to the remote origin at git@github.com:wafuwafu13/aws-cdk.git
Detected a sensitive ARN. Push will be blocked. Please refer to https://w.amazon.com/bin/view/AWS/Teams/GlobalServicesSecurity/Engineering/CodeDefender/UserHelp/#44 for help information.
* arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab 0c653d3a1d29324025a0ab451e72e4ca3f048ca3:packages/aws-cdk-lib/aws-stepfunctions-tasks/test/bedrock/create-model-customization-job.test.ts
issue
test(eks): remove eks.KubernetesVersion.V1_24 as default version in integration tests
code: zsh
The resource ClusterNodegroupBottlerocketNG299226DAB is in a CREATE_FAILED state
This AWS::EKS::Nodegroup resource is in a CREATE_FAILED state.
Resource handler returned message: "[Issue(Code=NodeCreationFailure, Message=Unhealthy nodes in the kubernetes cluster, ResourceIds=i-0dc2e15a39d3fb249, i-0a60d969fa0b43fdc)] (Service: null, Status Code: 0, Request ID: null)" (RequestToken: 3550b966-62c4-5a1e-2ba0-04e231622f59, HandlerErrorCode: GeneralServiceException)
code: zsh
Launching a new EC2 instance. Status Reason: Could not launch On-Demand Instances. Unsupported - Your requested instance type (a1.medium) is not supported in your requested Availability Zone (eu-west-1a). Please retry your request by not specifying an Availability Zone or choosing eu-west-1b, eu-west-1c. Launching EC2 instance failed.
https://scrapbox.io/files/680b33567105778b65509cf3.png
https://scrapbox.io/files/680b3359e0dd0a6c385a5092.png
https://scrapbox.io/files/680b3365d4a0429c09a13e92.png
https://scrapbox.io/files/680b33699abf21392ac48abe.png
https://scrapbox.io/files/680b336dec23c39c976a818c.png
code: ts
this.cluster.addNodegroupCapacity('BottlerocketNG2', {
amiType: NodegroupAmiType.BOTTLEROCKET_ARM_64,
instanceTypes: [ec2.InstanceType.of(ec2.InstanceClass.C6G, ec2.InstanceSize.LARGE)],
});
https://scrapbox.io/files/680ca791ba6a0c284b81eafb.png
code: zsh
git checkout main
git fetch upstream
git merge upstream/main
git checkout integ-update-ekscluster
git merge upstream/main
Automatic merge failed; fix conflicts and then commit the result.
https://scrapbox.io/files/68152454cf24a40bfe635b22.png
.json -> Accept Current Change -> +
.zip -> +
https://scrapbox.io/files/680ca775a591cadfb230d8e6.png
code: zsh
$ git commit -m "Merge remote-tracking branch 'upstream/main' into integ-update-ekscluster"
Code Defender has found a Private RSA Key:
packages/@aws-cdk-testing/framework-integ/test/aws-lambda-event-sources/test/integ.s3-onfailuire-destination.js.snapshot/lambda-event-source-s3ofd.template.json:182: "SecretString": "{\"certificate\":\"-----BEGIN CERTIFICATE-----\\n MIIE5DCCAsygAwIBAgIRAPJdwaFaNRrytHBto0j5BA0wDQYJKoZIhvcNAQELBQAw\\n cmUuiAii9R0=\\n -----END CERTIFICATE-----\\n -----BEGIN CERTIFICATE-----\\n MIIFgjCCA2qgAwIBAgIQdjNZd6uFf9hbNC5RdfmHrzANBgkqhkiG9w0BAQsFADBb\\n c8PH3PSoAaRwMMgOSA2ALJvbRz8mpg==\\n -----END CERTIFICATE-----\\\"\\n \",\"privateKey\":\"-----BEGIN ENCRYPTED PRIVATE KEY-----\\n zp2mwJn2NYB7AZ7+imp0azDZb+8YG2aUCiyqb6PnnA==\\n -----END ENCRYPTED PRIVATE KEY-----\"}"
If the secret found is allowed, check our FAQ for the list of approved secrets: https://w.amazon.com/bin/view/AWS/Teams/GlobalServicesSecurity/Engineering/CodeDefender/UserHelp/#59
Possible mitigations:
Mark false positives as allowed using: git config --add secrets.allowed ...
Mark false positives as allowed by adding regular expressions to .gitallowed at repository's root directory
$ git config --add secrets.allowed 'BEGIN ENCRYPTED PRIVATE KEY'
$ git config --add secrets.allowed 'BEGIN CERTIFICATE'
$ git commit -m "Merge remote-tracking branch 'upstream/main' into integ-update-ekscluster"
fatal: could not open '.git/MERGE_HEAD' for reading: No such file or directory
$ git status
On branch integ-update-ekscluster
nothing to commit, working tree clean
$ git push origin integ-update-ekscluster
issue
fix(eks): update aws-node-termination-handler chart version
https://github.com/aws/aws-cdk/blob/1c0e03f0b1d34eae3b4c519a2524348b42d24e90/packages/%40aws-cdk-testing/framework-integ/test/aws-eks/test/integ.eks-inference.js.snapshot/aws-cdk-eks-cluster-inference.template.json#L407
https://github.com/aws/aws-cdk/blob/main/packages/%40aws-cdk-testing/framework-integ/test/aws-eks/test/integ-tests-kubernetes-version.ts
getClusterVersionConfig
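A sketch of how the aws-eks integ tests consume this helper, assuming it keeps returning { version, kubectlLayer } to spread into the cluster props:
code: ts
// Sketch: every eks integ test stays on a supported Kubernetes version by
// spreading the shared helper's result into the cluster props.
import { App, Stack } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import { getClusterVersionConfig } from './integ-tests-kubernetes-version';

const app = new App();
const stack = new Stack(app, 'aws-cdk-eks-cluster-version-sketch');
new eks.Cluster(stack, 'Cluster', {
  ...getClusterVersionConfig(stack), // spreads version + matching kubectl layer
  defaultCapacity: 0,
});
app.synth();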
code: zsh
test git:(eks-output-config) yarn integ aws-eks/test/integ.eks-inference.js --update-on-failed --parallel-regions eu-west-1
aws-cdk-eks-cluster-inference | 3/81 | 1:07:08 PM | ROLLBACK_COMPLETE | AWS::CloudFormation::Stack | aws-cdk-eks-cluster-inference
Failed resources:
aws-cdk-eks-cluster-inference | 1:05:44 PM | CREATE_FAILED | Custom::AWSCDK-EKS-Cluster | Cluster/Resource/Resource/Default (Cluster9EE0221C) Received response status FAILED from custom resource. Message returned: unsupported Kubernetes version 1.24
https://github.com/aws/aws-node-termination-handler/blob/b1181bfafebeb389a55dba63a43e5cc167e9f014/config/helm/aws-node-termination-handler/Chart.yaml#L5
https://artifacthub.io/packages/helm/aws-node-termination-handler/aws-node-termination-handler
https://github.com/aws/aws-node-termination-handler/pull/758
code: zsh
Error occurred while monitoring stack: SignatureDoesNotMatch: Signature expired: 20250421T205708Z is now earlier than 20250421T210333Z (20250421T210833Z - 5 min.)
at throwDefaultError (/workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:75605:24)
at /workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:75614:9
at de_CommandError (/workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:117439:18)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async /workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:69783:24
at async /workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:69904:22
at async /workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:80123:42
at async /workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:69520:26
at async _StackEventPoller.doPoll (/workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:126340:26)
at async _StackEventPoller.poll (/workspaces/aws-cdk/node_modules/aws-cdk/lib/index.js:126322:24) {
'$fault': 'client',
'$metadata': Object,
Type: 'Sender',
Code: 'SignatureDoesNotMatch'
}
❌ aws-cdk-eks-cluster-ipv6-test failed: SignatureDoesNotMatch: Signature expired: 20250421T205708Z is now earlier than 20250421T210333Z (20250421T210833Z - 5 min.)
code: zsh
chart "aws-node-termination-handler" version "0.27.0" not found in https://aws.github.io/eks-charts repository\n'
issue
fix(eks): integ test failed with InvalidParameterException
code: zsh
Received response status FAILED from custom resource. Message returned: AccessConfig AuthMode must be API_AND_CONFIG_MAP or API when remoteNetworkConfig is specified Logs: /aws/lambda/aws-cdk-eks-cluster-hybrid--OnEventHandler42BEBAE0-5TBqAh9CQQaJ at de_InvalidParameterExceptionRes (/var/runtime/node_modules/@aws-sdk/client-eks/dist-cjs/index.js:2826:21) at de_CommandError (/var/runtime/node_modules/@aws-sdk/client-eks/dist-cjs/index.js:2724:19) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async /var/runtime/node_modules/@aws-sdk/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20 at async /var/runtime/node_modules/@aws-sdk/node_modules/@smithy/core/dist-cjs/index.js:167:18 at async /var/runtime/node_modules/@aws-sdk/node_modules/@smithy/middleware-retry/dist-cjs/index.js:321:38 at async /var/runtime/node_modules/@aws-sdk/middleware-logger/dist-cjs/index.js:33:22 at async aB.onCreate (/var/task/index.js:51:649356) (RequestId: 1a7fdf37-7df1-4f2e-bad6-10e075cb860a)
code: zsh
$ yarn integ aws-eks/test/integ.eks-hybrid-nodes.js --update-on-failed --parallel-regions eu-west-1
...
❌ aws-cdk-eks-cluster-hybrid-nodes failed: _ToolkitError: The stack named aws-cdk-eks-cluster-hybrid-nodes failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Received response status FAILED from custom resource. Message returned: AccessConfig AuthMode must be API_AND_CONFIG_MAP or API when remoteNetworkConfig is specified
code: zsh
❌ aws-cdk-eks-cluster-hybrid-nodes failed: _ToolkitError: The stack named aws-cdk-eks-cluster-hybrid-nodes failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Received response status FAILED from custom resource. Message returned: Invalid remote node network: CIDR 10.0.0.0/16 overlaps with VPC CIDR 10.0.0.0/16
issue
(EKS): outputConfigCommand does not work any more without mastersRole
https://github.com/aws/aws-cdk/pull/33673
code:txt
aws-cdk-lib: PASS aws-events/test/connection.test.ts
@aws-cdk/aws-eks-v2-alpha: UNCHANGED integ.eks-cluster 5.698s
@aws-cdk/aws-eks-v2-alpha: Snapshot Results:
@aws-cdk/aws-eks-v2-alpha: Tests: 6 failed, 14 total
@aws-cdk/aws-eks-v2-alpha: Failed: /codebuild/output/src2051224748/src/github.com/aws/aws-cdk/packages/@aws-cdk/aws-eks-v2-alpha/test/integ.alb-controller.js
@aws-cdk/aws-eks-v2-alpha: Failed: /codebuild/output/src2051224748/src/github.com/aws/aws-cdk/packages/@aws-cdk/aws-eks-v2-alpha/test/integ.eks-addon.js
@aws-cdk/aws-eks-v2-alpha: Failed: /codebuild/output/src2051224748/src/github.com/aws/aws-cdk/packages/@aws-cdk/aws-eks-v2-alpha/test/integ.eks-inference-nodegroup.js
@aws-cdk/aws-eks-v2-alpha: Failed: /codebuild/output/src2051224748/src/github.com/aws/aws-cdk/packages/@aws-cdk/aws-eks-v2-alpha/test/integ.eks-standard-access-entry.js
@aws-cdk/aws-eks-v2-alpha: Failed: /codebuild/output/src2051224748/src/github.com/aws/aws-cdk/packages/@aws-cdk/aws-eks-v2-alpha/test/integ.eks-subnet-updates.js
@aws-cdk/aws-eks-v2-alpha: Failed: /codebuild/output/src2051224748/src/github.com/aws/aws-cdk/packages/@aws-cdk/aws-eks-v2-alpha/test/integ.fargate-cluster.js
@aws-cdk/aws-eks-v2-alpha: Error: Some tests failed!
@aws-cdk/aws-eks-v2-alpha: To re-run failed tests run: integ-runner --update-on-failed
@aws-cdk/aws-eks-v2-alpha: at main (/codebuild/output/src2051224748/src/github.com/aws/aws-cdk/packages/@aws-cdk/integ-runner/lib/cli.js:192:19)
aws-cdk-lib: PASS aws-stepfunctions-tasks/test/emr/emr-terminate-cluster.test.ts
@aws-cdk/aws-eks-v2-alpha: Error: integ-runner exited with error code 1
@aws-cdk/aws-eks-v2-alpha: Tests failed. Total time (1m8.6s) | /codebuild/output/src2051224748/src/github.com/aws/aws-cdk/node_modules/jest/bin/jest.js (1m1.8s) | integ-runner (6.6s)
@aws-cdk/aws-eks-v2-alpha: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
code: zsh
Failed resources:
integ-eks-stack | 11:59:59 | CREATE_FAILED | AWS::Lambda::LayerVersion | KubectlLayer (KubectlLayer600207B5) Resource handler returned message: "Could not unzip uploaded file. Please check your file, then try to upload again. (Service: Lambda, Status Code: 400, Request ID: ed40f0ef-f659-4bf1-b57a-d62d805817ad) (SDK Attempt Count: 1)" (RequestToken: cf4301f6-f535-2e3b-42fa-fd5042598e9b, HandlerErrorCode: InvalidRequest)
!!DESTRUCTIVE_CHANGES: WILL_REPLACE
Delete the cdk-hnb659fds-assets- bucket
Delete the CDKToolkit stack
https://github.com/aws/aws-cdk/issues/19695
code: zsh
$ unzip 9953ad4c3e84d120643ece4b2e51caf43fd9850063641b4d78bf30fbe6b4d381.zip
Archive: 9953ad4c3e84d120643ece4b2e51caf43fd9850063641b4d78bf30fbe6b4d381.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of 9953ad4c3e84d120643ece4b2e51caf43fd9850063641b4d78bf30fbe6b4d381.zip or
9953ad4c3e84d120643ece4b2e51caf43fd9850063641b4d78bf30fbe6b4d381.zip.zip, and cannot find 9953ad4c3e84d120643ece4b2e51caf43fd9850063641b4d78bf30fbe6b4d381.zip.ZIP, period.
https://github.com/aws/aws-cdk/issues/5636#issuecomment-681039022
We resolved the issue, the S3 bucket had corrupt files. We cleared out the entire S3 bucket of arn:aws:s3:::cdk-hiudfsaf-assets-34723248234-us-west-2 (replaced IDs in S3 folder name) and now deployments are working again.
Empty the cdk-hnb659fds-assets- bucket
Delete the .snapshot folder
code: zsh
❌ aws-cdk-eks-cluster-hybrid-nodes failed: _ToolkitError: The stack named aws-cdk-eks-cluster-hybrid-nodes failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Received response status FAILED from custom resource. Message returned: AccessConfig AuthMode must be API_AND_CONFIG_MAP or API when remoteNetworkConfig is specified
https://github.com/aws/aws-cdk/blob/main/packages/%40aws-cdk-testing/framework-integ/test/aws-eks/test/integ.eks-hybrid-nodes.js.snapshot/aws-cdk-eks-cluster-hybrid-nodes.template.json
issue
EC2 fails when importing cdk
Try cdk import
Create an S3 bucket
https://scrapbox.io/files/671fe8a9c94e3a0e74701920.png
code: ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
export class ImportdemoStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
new s3.Bucket(this, 'ImportBucket', {
bucketName: 'cdkimport1028'
})
}
}
code: zsh
$ cdk import
ImportdemoStack
ImportdemoStack/ImportBucket/Resource (AWS::S3::Bucket): import with BucketName=cdkimport1028 (yes/no) default: yes? yes
ImportdemoStack: importing resources into stack...
ImportdemoStack: creating CloudFormation changeset...
✅ ImportdemoStack
Import operation complete. We recommend you run a drift detection operation to confirm your CDK app resource definitions are up-to-date. Read more here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/detect-drift-stack.html
https://scrapbox.io/files/671fe95cebd8e0762180c730.png
Create an EC2 instance
code: ts
const vpc = ec2.Vpc.fromLookup(this, 'ImportedVpc', {
vpcId: 'vpc-02ff5317a40930842'
});
new ec2.Instance(this, 'MyInstance', {
vpc,
instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
machineImage: new ec2.AmazonLinuxImage({ generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2 }),
});
code: zsh
$ cdk import
ImportdemoStack
ImportdemoStack/MyInstance/InstanceSecurityGroup/Resource (AWS::EC2::SecurityGroup): enter Id (empty to skip): sg-0433c2ca0d13a5d34
ImportdemoStack/MyInstance/InstanceRole/Resource (AWS::IAM::Role): enter RoleName (empty to skip):
Skipping import of ImportdemoStack/MyInstance/InstanceRole/Resource
ImportdemoStack/MyInstance/InstanceProfile (AWS::IAM::InstanceProfile): enter InstanceProfileName (empty to skip):
Skipping import of ImportdemoStack/MyInstance/InstanceProfile
ImportdemoStack/MyInstance/Resource (AWS::EC2::Instance): enter InstanceId (empty to skip): i-0b3de55fb96d30af4
ImportdemoStack: importing resources into stack...
ImportdemoStack: creating CloudFormation changeset...
❌ ImportdemoStack failed: Error ValidationError: Template format error: Unresolved resource dependencies SsmParameterValueawsserviceamiamazonlinuxlatestamzn2amihvmx8664gp2C96584B6F00A464EAD1953AFF4B05118Parameter, MyInstanceInstanceRole1C4D4747, MyInstanceInstanceProfile2784C631 in the Resources block of the template
at Request.extractError (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:46717)
at Request.callListeners (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:91771)
at Request.emit (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:91219)
at Request.emit (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:199820)
at Request.transition (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:193373)
at AcceptorStateMachine.runTo (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:158245)
at /Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:158575
at Request.<anonymous> (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:193665)
at Request.<anonymous> (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:199895)
at Request.callListeners (/Users/herotaka/.nvm/versions/node/v22.7.0/lib/node_modules/aws-cdk/lib/index.js:401:91939) {
code: 'ValidationError',
time: 2024-10-28T20:00:07.345Z,
requestId: '9ae0d10f-3e7d-4130-9915-2490e349c161',
statusCode: 400,
retryable: false,
retryDelay: 291.9043374966537
}
Template format error: Unresolved resource dependencies SsmParameterValueawsserviceamiamazonlinuxlatestamzn2amihvmx8664gp2C96584B6F00A464EAD1953AFF4B05118Parameter, MyInstanceInstanceRole1C4D4747, MyInstanceInstanceProfile2784C631 in the Resources block of the template
code: zsh
$ cdk synth
...
Parameters:
SsmParameterValueawsserviceamiamazonlinuxlatestamzn2amihvmx8664gp2C96584B6F00A464EAD1953AFF4B05118Parameter:
Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
code: json
,
"CDKMetadata": {
"Type": "AWS::CDK::Metadata",
"Properties": {
"Analytics": "v2:deflate64:H4sIAAAAAAAA/2WLzQ6CMBCEn4V7WbGGeNeD8UbgAcxalqT8tGbbSkzTdzeAnDzN5PtmJBzLEooMZ5erdshH/YTYeFSDwNk9ojtBvAQ1kBfXzmwtCVIS4t04j0aRaEgF1v5zYxtey+wP7NMkNE4QazvSgvfcdcW20yOlJGpyNrBabYWME3ni9fITSRjbEvTu8JYSzlBkvdM652C8ngjqLb8BJtb43gAAAA=="
},
"Metadata": {
"aws:cdk:path": "ImportdemoStack/CDKMetadata/Default"
}
}
issue
code: packages/aws-cdk-lib/aws-ecs-patterns/lib/fargate/network-load-balanced-fargate-service.ts
/**
* The security groups to associate with the service. If you do not specify a security group, a new security group is created.
*
* @default - A new security group is created.
*/
readonly securityGroups?: ISecurityGroup[];
If you do not specify a security group, a new security group is created. <- Really?
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js.snapshot/aws-ecs-integ-lb-fargate.template.json
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer"
-> The NLB has no security group (before the fix)
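A minimal sketch of passing the securityGroups prop quoted above to NetworkLoadBalancedFargateService (the groups attach to the Fargate service, not the NLB):
code: ts
// Sketch of the securityGroups prop quoted above; the groups attach to the
// Fargate service ENIs (the NLB template itself shows no security group).
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

export class NlbFargateSgStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const serviceSg = new ec2.SecurityGroup(this, 'ServiceSg', { vpc });
    new ecsPatterns.NetworkLoadBalancedFargateService(this, 'Service', {
      vpc,
      securityGroups: [serviceSg],
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
      },
    });
  }
}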
issue
aws-eks: neuron device plugin manifest better reference
(eks): missing access to Kubernetes objects on EKS cluster creation
https://docs.aws.amazon.com/cdk/api/v2//docs/aws-cdk-lib.aws_eks-readme.html#masters-role
Add it to the Admin role's trust policy
issue
code: packages/@aws-cdk-testing/framework-integ/test/aws-route53-patterns/test/integ.hosted-redirect-same-region.ts
const hostedZoneId = process.env.CDK_INTEG_HOSTED_ZONE_ID ?? process.env.HOSTED_ZONE_ID;
if (!hostedZoneId) throw new Error('For this test you must provide your own HostedZoneId as an env var "HOSTED_ZONE_ID". See framework-integ/README.md for details.');
const hostedZoneName = process.env.CDK_INTEG_HOSTED_ZONE_NAME ?? process.env.HOSTED_ZONE_NAME;
if (!hostedZoneName) throw new Error('For this test you must provide your own HostedZoneName as an env var "HOSTED_ZONE_NAME". See framework-integ/README.md for details.');
const domainName = process.env.CDK_INTEG_DOMAIN_NAME ?? process.env.DOMAIN_NAME;
if (!domainName) throw new Error('For this test you must provide your own DomainName as an env var "DOMAIN_NAME". See framework-integ/README.md for details.');
https://github.com/aws/aws-cdk/tree/main/packages/%40aws-cdk-testing/framework-integ
code: zsh
ec2-user@ip-172-31-0-239 ~$ sudo yum install -y httpd
ec2-user@ip-172-31-0-239 ~$ sudo systemctl start httpd
ec2-user@ip-172-31-0-239 html$ sudo vi index.html
$ curl 18.201.145.54
<h1>Hello World</h1>
$ curl dub28-928315127.eu-west-1.elb.amazonaws.com
<h1>Hello World</h1>
$ curl (alias).hjk.jp
<h1>Hello World</h1>
$ curl https://(alias).hjk.jp
<h1>Hello World</h1>
code: packages/aws-cdk-lib/aws-ecs-patterns/lib/base/application-load-balanced-service-base.ts
if (protocol === ApplicationProtocol.HTTPS) {
if (props.certificate !== undefined) {
this.certificate = props.certificate;
} else {
if (typeof props.domainName === 'undefined' || typeof props.domainZone === 'undefined') {
throw new Error('A domain name and zone is required when using the HTTPS protocol');
}
this.certificate = new Certificate(this, 'Certificate', {
domainName: props.domainName,
validation: CertificateValidation.fromDns(props.domainZone),
});
}
}
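A minimal sketch of the caller side that exercises this HTTPS branch (domain and zone are placeholders; the stack needs an explicit env for HostedZone.fromLookup):
code: ts
// Sketch: HTTPS without an explicit certificate, so the branch above creates a
// DNS-validated Certificate from domainName + domainZone.
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import * as route53 from 'aws-cdk-lib/aws-route53';
import { ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

export class AlbHttpsStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props); // pass an explicit env so HostedZone.fromLookup can resolve
    const zone = route53.HostedZone.fromLookup(this, 'Zone', { domainName: 'example.com' });
    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Service', {
      protocol: ApplicationProtocol.HTTPS,
      domainName: 'test.example.com',
      domainZone: zone,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
      },
    });
  }
}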
code:ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
export class CdkPlaygroundStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const vpc = new ec2.Vpc(this, 'VPC', {
maxAzs: 3
});
const cluster = new eks.Cluster(this, 'EKSCluster', {
vpc,
version: eks.KubernetesVersion.V1_29,
defaultCapacity: 0
});
cluster.addNodegroupCapacity('Inf2NodeGroup', {
instanceTypes: [new ec2.InstanceType('inf2.xlarge')],
minSize: 1,
});
}
}
code: zsh
AWS_DEFAULT_REGION=eu-west-2 cdk deploy
code: zsh
[Tried to create resource record set name='(alias).hjk.jp.', type='A' but it already exists]
code: zsh
➜ test git:(integ-fargate-domain) yarn integ --disable-update-workflow aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https.js d s 2
yarn run v1.22.19
$ integ-runner --language javascript --disable-update-workflow aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https.js
Verifying integration test snapshots...
CHANGED aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https 2.81s
Resources
~ AWS::CertificateManager::Certificate myServiceCertificate152F9DDA replace
├─ ~ DomainName (requires replacement)
│ ├─ - test.example.com
│ └─ + *.example.com
└─ ~ DomainValidationOptions (requires replacement)
└─ @@ -1,6 +1,6 @@
[
{
- "DomainName": "test.example.com",
- "HostedZoneId": "fakeId"
+ "DomainName": "*.example.com",
+ "HostedZoneId": "Z23ABC4XYZL05B"
}
]
~ AWS::Route53::RecordSet myServiceDNSD76FB53A replace
├─ ~ HostedZoneId (requires replacement)
│ ├─ - fakeId
│ └─ + Z23ABC4XYZL05B
└─ ~ Name (requires replacement)
├─ - test.example.com.
└─ + *.example.com.
Snapshot Results:
Tests: 1 failed, 1 total
Failed: /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https.js
!!! This test contains destructive changes !!!
Stack: aws-ecs-integ-alb-fg-https - Resource: myServiceCertificate152F9DDA - Impact: WILL_REPLACE
Stack: aws-ecs-integ-alb-fg-https - Resource: myServiceDNSD76FB53A - Impact: WILL_REPLACE
!!! If these destructive changes are necessary, please indicate this on the PR !!!
Error: Some changes were destructive!
at main (/workspaces/aws-cdk/packages/@aws-cdk/integ-runner/lib/cli.js:183:15)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
➜ test git:(integ-fargate-domain) yarn integ --disable-update-workflow aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https.js --update-on-failed --parallel-regions eu-west-2
yarn run v1.22.19
$ integ-runner --language javascript --disable-update-workflow aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https.js --update-on-failed --parallel-regions eu-west-2
Verifying integration test snapshots...
CHANGED aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https 2.28s
Resources
~ AWS::CertificateManager::Certificate myServiceCertificate152F9DDA replace
├─ ~ DomainName (requires replacement)
│ ├─ - test.example.com
│ └─ + *.example.com
└─ ~ DomainValidationOptions (requires replacement)
└─ @@ -1,6 +1,6 @@
[
{
- "DomainName": "test.example.com",
- "HostedZoneId": "fakeId"
+ "DomainName": "*.example.com",
+ "HostedZoneId": "Z23ABC4XYZL05B"
}
]
~ AWS::Route53::RecordSet myServiceDNSD76FB53A replace
├─ ~ HostedZoneId (requires replacement)
│ ├─ - fakeId
│ └─ + Z23ABC4XYZL05B
└─ ~ Name (requires replacement)
├─ - test.example.com.
└─ + *.example.com.
Snapshot Results:
Tests: 1 failed, 1 total
Failed: /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https.js
!!! This test contains destructive changes !!!
Stack: aws-ecs-integ-alb-fg-https - Resource: myServiceCertificate152F9DDA - Impact: WILL_REPLACE
Stack: aws-ecs-integ-alb-fg-https - Resource: myServiceDNSD76FB53A - Impact: WILL_REPLACE
!!! If these destructive changes are necessary, please indicate this on the PR !!!
Running integration tests for failed tests...
Running in parallel across regions: eu-west-2
Running test /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.alb-fargate-service-https.js in eu-west-2
issue
https://github.com/aws/aws-cdk/blob/a75f447d6dc9ad8b1b00a7faebdd8aadc3d25e28/packages/aws-cdk-lib/aws-ecs-patterns/README.md
Organize
issue
ecs-patterns: enable to specify securityGroups in NetworkLoadBalancedFargateService
issue
feat(ecs-patterns): support dualstack ALB in both ec2 and fargate
(network-load-balanced-fargate-service): (Add support for creating load balancer with IPv6 support)
code: zsh
$ aws elbv2 describe-load-balancers --names aws-ec-ALBFa-upovB6px7ex9 --region eu-west-2
{
"LoadBalancers": [
{
"LoadBalancerArn": "arn:aws:elasticloadbalancing:eu-west-2:xxx:loadbalancer/app/aws-ec-ALBFa-upovB6px7ex9/1b0d62273ea4180c",
"DNSName": "aws-ec-ALBFa-upovB6px7ex9-1054150935.eu-west-2.elb.amazonaws.com",
"CanonicalHostedZoneId": "ZHURV8PSTC4K8",
"CreatedTime": "2024-03-01T16:51:30.470000+00:00",
"LoadBalancerName": "aws-ec-ALBFa-upovB6px7ex9",
"Scheme": "internet-facing",
"VpcId": "vpc-0713b58c6765f7d87",
"State": {
"Code": "active"
},
"Type": "application",
"AvailabilityZones": [
{
"ZoneName": "eu-west-2b",
"SubnetId": "subnet-05dcf249d06336e39",
"LoadBalancerAddresses": []
},
{
"ZoneName": "eu-west-2a",
"SubnetId": "subnet-0cf0fcd94dfd4bd2f",
"LoadBalancerAddresses": []
}
],
"SecurityGroups": [
"sg-0fdff890b0876eabf"
],
"IpAddressType": "ipv4"
}
]
}
$ aws elbv2 describe-load-balancers --names aws-ec-NLBFa-D5tqq6UBeN9u --region eu-west-2
{
"LoadBalancers": [
{
"LoadBalancerArn": "arn:aws:elasticloadbalancing:eu-west-2:xxx:loadbalancer/net/aws-ec-NLBFa-D5tqq6UBeN9u/237e00ce858451b0",
"DNSName": "aws-ec-NLBFa-D5tqq6UBeN9u-237e00ce858451b0.elb.eu-west-2.amazonaws.com",
"CanonicalHostedZoneId": "ZD4D7Y8KGAS4G",
"CreatedTime": "2024-03-01T16:51:30.375000+00:00",
"LoadBalancerName": "aws-ec-NLBFa-D5tqq6UBeN9u",
"Scheme": "internet-facing",
"VpcId": "vpc-0713b58c6765f7d87",
"State": {
"Code": "active"
},
"Type": "network",
"AvailabilityZones": [
{
"ZoneName": "eu-west-2b",
"SubnetId": "subnet-05dcf249d06336e39",
"LoadBalancerAddresses": []
},
{
"ZoneName": "eu-west-2a",
"SubnetId": "subnet-0cf0fcd94dfd4bd2f",
"LoadBalancerAddresses": []
}
],
"IpAddressType": "ipv4"
}
]
}
Both are also needed for the NLB
code: zsh
ALB
{
"SecurityGroups": [
{
"Description": "Automatically created Security Group for ELB awsecsinteglbfargateALBFargateServiceLBF93E98F2",
"GroupName": "aws-ecs-integ-lb-fargate-ALBFargateServiceLBSecurityGroup5DC3060E-1ROVY50YRPPO8",
"IpPermissions": [
{
"FromPort": 80,
"IpProtocol": "tcp",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0",
"Description": "Allow from anyone on port 80"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 80,
"UserIdGroupPairs": []
}
],
"OwnerId": "xxx",
"GroupId": "sg-0fdff890b0876eabf",
"IpPermissionsEgress": [
{
"FromPort": 80,
"IpProtocol": "tcp",
"IpRanges": [],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 80,
"UserIdGroupPairs": [
{
"Description": "Load balancer to target",
"GroupId": "sg-067f244106a7d62c8",
"UserId": "xxx"
}
]
}
],
"Tags": [
{
"Key": "aws:cloudformation:stack-id",
"Value": "arn:aws:cloudformation:eu-west-2:xxx:stack/aws-ecs-integ-lb-fargate/dc5d1c00-d7eb-11ee-aee7-02887ba073dd"
},
{
"Key": "aws:cloudformation:logical-id",
"Value": "ALBFargateServiceLBSecurityGroup5DC3060E"
},
{
"Key": "aws:cloudformation:stack-name",
"Value": "aws-ecs-integ-lb-fargate"
}
],
"VpcId": "vpc-0713b58c6765f7d87"
}
]
}
code: zsh
Service
{
"SecurityGroups": [
{
"Description": "aws-ecs-integ-lb-fargate/ALBFargateService/Service/SecurityGroup",
"GroupName": "aws-ecs-integ-lb-fargate-ALBFargateServiceSecurityGroup82F7A67E-LFTPDIEUO2KP",
"IpPermissions": [
{
"FromPort": 80,
"IpProtocol": "tcp",
"IpRanges": [],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 80,
"UserIdGroupPairs": [
{
"Description": "Load balancer to target",
"GroupId": "sg-0235aaf2f40b6789d",
"UserId": "xxx"
}
]
}
],
"OwnerId": "xxx",
"GroupId": "sg-0f44ab61bdb7cd805",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0",
"Description": "Allow all outbound traffic by default"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"Tags": [
{
"Key": "aws:cloudformation:stack-name",
"Value": "aws-ecs-integ-lb-fargate"
},
{
"Key": "aws:cloudformation:stack-id",
"Value": "arn:aws:cloudformation:eu-west-3:xxx:stack/aws-ecs-integ-lb-fargate/b047c240-d7fb-11ee-9c12-0ed8ce3f7ff3"
},
{
"Key": "aws:cloudformation:logical-id",
"Value": "ALBFargateServiceSecurityGroup82F7A67E"
}
],
"VpcId": "vpc-0f52b6c8d3e3628d6"
}
]
}
code: packages/aws-cdk-lib/aws-elasticloadbalancingv2/lib/nlb/network-load-balancer.ts
/**
* After the implementation of IConnectable (see https://github.com/aws/aws-cdk/pull/28494), the default
* value for securityGroups is set by the ec2.Connections constructor to an empty array.
* To keep backward compatibility (securityGroups is undefined if the related property is not specified)
* a getter has been added.
*/
public get securityGroups(): string[] | undefined {
return this.isSecurityGroupsPropertyDefined || this.connections.securityGroups.length
? this.connections.securityGroups.map(sg => sg.securityGroupId)
: undefined;
}
Cannot be specified!!
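So there is no securityGroups prop to set at construction time; the workaround used in the integ test below is to attach an SG through connections after the fact (sketch with an assumed existing SG):
code:ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

declare const nlb: elbv2.NetworkLoadBalancer;
declare const securityGroup: ec2.ISecurityGroup;

// After https://github.com/aws/aws-cdk/pull/28494 the NLB is IConnectable,
// so a security group can be attached post-construction ...
nlb.connections.addSecurityGroup(securityGroup);

// ... and the getter above then returns its ID (a token at synth time)
// instead of undefined.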
code:ts
// Create NLB service
const networkLoadBalancedFargateService = new ecsPatterns.NetworkLoadBalancedFargateService(stack, 'NLBFargateService', {
cluster,
memoryLimitMiB: 1024,
cpu: 512,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
},
secu..
});
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js.snapshot/aws-ecs-integ-lb-fargate.template.json
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer"
-> the NLB has no SG
code: zsh
➜ test git:(integ-fargate-domain) yarn integ aws-ecs-patterns/test/fargate/integ.l3.js --disable-update-workflow --update-on-failed --parallel-regions eu-west-2
yarn run v1.22.19
$ integ-runner --language javascript aws-ecs-patterns/test/fargate/integ.l3.js --disable-update-workflow --update-on-failed --parallel-regions eu-west-2
Verifying integration test snapshots...
CHANGED aws-ecs-patterns/test/fargate/integ.l3 2.329s
Security Group Changes
┌───┬──────────────────────────────────┬─────┬─────────────┬────────────────────┐
│ │ Group │ Dir │ Protocol │ Peer │
├───┼──────────────────────────────────┼─────┼─────────────┼────────────────────┤
│ + │ ${SecurityGroupDD263621.GroupId} │ In │ TCP 80 │ Everyone (IPv4) │
│ + │ ${SecurityGroupDD263621.GroupId} │ Out │ ICMP 252-86 │ 255.255.255.255/32 │
└───┴──────────────────────────────────┴─────┴─────────────┴────────────────────┘
(NOTE: There may be security-related changes not in this list. See https://github.com/aws/aws-cdk/issues/1299)
Resources
- AWS::EC2::SecurityGroup NLBFargateServiceSecurityGroup9D81388B destroy
+ AWS::EC2::SecurityGroup SecurityGroupDD263621
~ AWS::ECS::Service NLBFargateServiceB92AC095
└─ ~ NetworkConfiguration
└─ ~ .AwsvpcConfiguration:
└─ ~ .SecurityGroups:
└─ @@ -1,7 +1,7 @@
[
{
"Fn::GetAtt": [
- "NLBFargateServiceSecurityGroup9D81388B",
+ "SecurityGroupDD263621",
"GroupId"
]
}
Snapshot Results:
Tests: 1 failed, 1 total
Failed: /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js
!!! This test contains destructive changes !!!
Stack: aws-ecs-integ-lb-fargate - Resource: NLBFargateServiceSecurityGroup9D81388B - Impact: WILL_DESTROY
!!! If these destructive changes are necessary, please indicate this on the PR !!!
Running integration tests for failed tests...
Running in parallel across regions: eu-west-2
Running test /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js in eu-west-2
-> the SG is not applied
TODO: try deleting the snapshot files
code: zsh
➜ test git:(integ-fargate-domain) yarn integ aws-ecs-patterns/test/fargate/integ.l3.js --disable-update-workflow --update-on-failed --parallel-regions eu-west-2
yarn run v1.22.19
$ integ-runner --language javascript aws-ecs-patterns/test/fargate/integ.l3.js --disable-update-workflow --update-on-failed --parallel-regions eu-west-2
Verifying integration test snapshots...
NEW aws-ecs-patterns/test/fargate/integ.l3 4.029s
Snapshot Results:
Tests: 1 failed, 1 total
Failed: /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js
Running integration tests for failed tests...
Running in parallel across regions: eu-west-2
Running test /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js in eu-west-2
-> the SG is not applied
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.ts
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
var ec2 = require("aws-cdk-lib/aws-ec2");
var ecs = require("aws-cdk-lib/aws-ecs");
var cdk = require("aws-cdk-lib");
var integ = require("@aws-cdk/integ-tests-alpha");
var ecsPatterns = require("aws-cdk-lib/aws-ecs-patterns");
var app = new cdk.App();
var stack = new cdk.Stack(app, 'aws-ecs-integ-lb-fargate');
// Create VPC and cluster
var vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2, restrictDefaultSecurityGroup: false });
var cluster = new ecs.Cluster(stack, 'FargateCluster', { vpc: vpc });
var securityGroup = new ec2.SecurityGroup(stack, 'SecurityGroup', {
vpc: vpc,
allowAllOutbound: true,
});
securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(80));
// Create ALB service
var applicationLoadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'ALBFargateService', {
cluster: cluster,
memoryLimitMiB: 1024,
cpu: 512,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
},
});
applicationLoadBalancedFargateService.loadBalancer.connections.addSecurityGroup(securityGroup);
// Create NLB service
var networkLoadBalancedFargateService = new ecsPatterns.NetworkLoadBalancedFargateService(stack, 'NLBFargateService', {
cluster: cluster,
memoryLimitMiB: 1024,
cpu: 512,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
},
securityGroups: [securityGroup],
});
networkLoadBalancedFargateService.loadBalancer.connections.addSecurityGroup(securityGroup);
new integ.IntegTest(app, 'l3FargateTest', {
testCases: stack,
});
app.synth();
code: zsh
➜ test git:(integ-fargate-domain) yarn integ aws-ecs-patterns/test/fargate/integ.l3.js --disable-update-workflow --update-on-failed --parallel-regions eu-west-2
yarn run v1.22.19
$ integ-runner --language javascript aws-ecs-patterns/test/fargate/integ.l3.js --disable-update-workflow --update-on-failed --parallel-regions eu-west-2
Verifying integration test snapshots...
CHANGED aws-ecs-patterns/test/fargate/integ.l3 3.749s
Security Group Changes
┌───┬───────────────────────────────────────────────────┬─────┬─────────────┬──────────────────────────────────┐
│ │ Group │ Dir │ Protocol │ Peer │
├───┼───────────────────────────────────────────────────┼─────┼─────────────┼──────────────────────────────────┤
│ - │ ${SecurityGroupDD263621.GroupId} │ Out │ ICMP 252-86 │ 255.255.255.255/32 │
├───┼───────────────────────────────────────────────────┼─────┼─────────────┼──────────────────────────────────┤
│ + │ ${ALBFargateServiceSecurityGroup82F7A67E.GroupId} │ In │ TCP 80 │ ${SecurityGroupDD263621.GroupId} │
├───┼───────────────────────────────────────────────────┼─────┼─────────────┼──────────────────────────────────┤
│ + │ ${SecurityGroupDD263621.GroupId} │ Out │ Everything │ Everyone (IPv4) │
└───┴───────────────────────────────────────────────────┴─────┴─────────────┴──────────────────────────────────┘
(NOTE: There may be security-related changes not in this list. See https://github.com/aws/aws-cdk/issues/1299)
Resources
+ AWS::EC2::SecurityGroupIngress ALBFargateServiceSecurityGroupfromawsecsinteglbfargateSecurityGroupCFF5F77180898CD07C
~ AWS::EC2::SecurityGroup SecurityGroupDD263621
└─ ~ SecurityGroupEgress
└─ @@ -1,9 +1,7 @@
[
{
- "CidrIp": "255.255.255.255/32",
- "Description": "Disallow all traffic",
- "FromPort": 252,
- "IpProtocol": "icmp",
- "ToPort": 86
+ "CidrIp": "0.0.0.0/0",
+ "Description": "Allow all outbound traffic by default",
+ "IpProtocol": "-1"
}
]
~ AWS::ElasticLoadBalancingV2::LoadBalancer ALBFargateServiceLB64A0074E
└─ ~ SecurityGroups
└─ @@ -4,5 +4,11 @@
"ALBFargateServiceLBSecurityGroup5DC3060E",
"GroupId"
]
+ },
+ {
+ "Fn::GetAtt": [
+ "SecurityGroupDD263621",
+ "GroupId"
+ ]
}
]
Snapshot Results:
Tests: 1 failed, 1 total
Failed: /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js
Running integration tests for failed tests...
Running in parallel across regions: eu-west-2
Running test /workspaces/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.js in eu-west-2
SUCCESS aws-ecs-patterns/test/fargate/integ.l3-l3FargateTest/DefaultTest 1009.563s
NO ASSERTIONS
Test Results:
Tests: 1 passed, 1 total
issue
feat(ecs-patterns): support dualstack ALB in both ec2 and fargate
fix(ecs-patterns): resolve not being able to create ECS service in integ.alb-ecs-service-command-entry-point
code: zsh
$ docker pull public.ecr.aws/ecs-sample-image/amazon-ecs-sample:latest
latest: Pulling from ecs-sample-image/amazon-ecs-sample
31b3f1ad4ce1: Pull complete
fd42b079d0f8: Pull complete
30585fbbebc6: Pull complete
18f4ffdd25f4: Pull complete
9dc932c8fba2: Pull complete
600c24b8ba39: Pull complete
0e3bc9105e7b: Pull complete
Digest: sha256:7ebff78b7d7bd0cb13d462ecf4d9aaa6ea7571bd5548008163d0499eae2fbf40
Status: Downloaded newer image for public.ecr.aws/ecs-sample-image/amazon-ecs-sample:latest
public.ecr.aws/ecs-sample-image/amazon-ecs-sample:latest
What's Next?
View summary of image vulnerabilities and recommendations → docker scout quickview public.ecr.aws/ecs-sample-image/amazon-ecs-sample:latest
~/desktop
$ docker run -d --name ecs-sample-container public.ecr.aws/ecs-sample-image/amazon-ecs-sample:latest
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
98a973d886ef01e9027c5a6d2dc045a4d050b84e906a8adb67762fe6e9fd46e6
~/desktop
$ docker exec -it ecs-sample-container /bin/sh
# /usr/sbin/apache2 -D FOREGROUND
/bin/sh: 1: /usr/sbin/apache2: not found
# ls
bin boot dev docker-entrypoint.d docker-entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# pwd
/
# exit
~/desktop
$ docker exec -it ecs-sample-container /bin/bash -l -c
/bin/bash: -c: option requires an argument
~/desktop
$ docker exec -it ecs-sample-container /bin/bash -l -c /usr/sbin/apache2 -D FOREGROUND
-D: line 1: /usr/sbin/apache2: No such file or directory
~/desktop
$ docker exec -it ecs-sample-container /bin/bash -l -c sleep 1000
sleep: missing operand
Try 'sleep --help' for more information.
~/desktop
$  docker exec -it ecs-sample-container /bin/bash -l -c "sleep 1000"
code: zsh
$ aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:eu-west-2:xxxx:targetgroup/aws-ec-ALBEC-THXCQGB6J3RW/601b10fee1cdfc71 --region eu-west-2
{
"TargetHealthDescriptions": [
{
"Target": {
"Id": "i-002586bbfd1f02d84",
"Port": 32788
},
"HealthCheckPort": "32788",
"TargetHealth": {
"State": "unhealthy",
"Reason": "Target.Timeout",
"Description": "Request timed out"
}
},
{
"Target": {
"Id": "i-002586bbfd1f02d84",
"Port": 32787
},
"HealthCheckPort": "32787",
"TargetHealth": {
"State": "draining",
"Reason": "Target.DeregistrationInProgress",
"Description": "Target deregistration is in progress"
}
}
]
}
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/ec2/integ.alb-ecs-service-command-entry-point.ts
const cx_api_1 = require("aws-cdk-lib/cx-api");
const app = new cdk.App({ postCliContext: { [cx_api_1.AUTOSCALING_GENERATE_LAUNCH_TEMPLATE]: false } });
code: zsh
root@98a973d886ef:/usr/sbin# apt-get update && apt-get install -y net-tools lsof
root@98a973d886ef:/usr/sbin# lsof -i :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1 root 7u IPv4 9802419 0t0 TCP *:80 (LISTEN)
nginx 1 root 8u IPv6 9802420 0t0 TCP *:80 (LISTEN)
https://github.com/aws-samples/ecs-demo-php-simple-app/blob/master/Dockerfile
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html
code: zsh
root@3f35beb5a668:/usr/local/apache2# httpd-foreground
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
issue
/workspaces/aws-cdk/packages/aws-cdk-lib/aws-ecs-patterns/test/fargate/load-balanced-fargate-service.test.ts refactor
refactor(ecs-patterns): organize hierarchy of describe in tests
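The intended shape is roughly this kind of nesting (illustrative only; the real refactor just regroups existing cases):
code:ts
describe('NetworkLoadBalancedFargateService', () => {
  describe('target group port', () => {
    test('defaults to 80', () => {
      // ...
    });
    test('uses targetPort when it is set', () => {
      // ...
    });
  });
});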
issue
ApplicationLoadBalancedFargateService : Add Support for IPv6 on LB's
https://github.com/aws/aws-cdk/blob/a75f447d6dc9ad8b1b00a7faebdd8aadc3d25e28/packages/aws-cdk-lib/aws-ecs-patterns/README.md
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/ec2/integ.alb-ecs-service-ipaddress-type.ts
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as cdk from 'aws-cdk-lib';
import * as integ from '@aws-cdk/integ-tests-alpha';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import { AUTOSCALING_GENERATE_LAUNCH_TEMPLATE } from 'aws-cdk-lib/cx-api';
import { IpAddressType } from 'aws-cdk-lib/aws-elasticloadbalancingv2';
const app = new cdk.App({ postCliContext: { [AUTOSCALING_GENERATE_LAUNCH_TEMPLATE]: false } });
const stack = new cdk.Stack(app, 'aws-ecs-integ-alb-ec2-cmd-entrypoint');
// Create VPC and ECS Cluster
const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2, restrictDefaultSecurityGroup: false });
const cluster = new ecs.Cluster(stack, 'Ec2Cluster', { vpc });
const provider = new ecs.AsgCapacityProvider(stack, 'CapacityProvier', {
autoScalingGroup: new autoscaling.AutoScalingGroup(
stack,
'AutoScalingGroup',
{
vpc,
instanceType: new ec2.InstanceType('t2.micro'),
machineImage: ecs.EcsOptimizedImage.amazonLinux2(),
},
),
capacityProviderName: 'test-capacity-provider',
});
cluster.addAsgCapacityProvider(provider);
// Create ALB service with Command and EntryPoint
new ecsPatterns.ApplicationLoadBalancedEc2Service(
stack,
'ALBECSServiceWithCommandEntryPoint',
{
cluster,
memoryLimitMiB: 512,
cpu: 256,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
},
capacityProviderStrategies: [
{
capacityProvider: provider.capacityProviderName,
base: 1,
weight: 1,
},
],
ipAddressType: IpAddressType.DUAL_STACK,
},
);
new integ.IntegTest(app, 'AlbEc2ServiceWithCommandAndEntryPoint', {
testCases: stack,
});
app.synth();
code: zsh
❌ aws-ecs-integ-alb-ec2-cmd-entrypoint failed: Error: The stack named aws-ecs-integ-alb-ec2-cmd-entrypoint failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Resource handler returned message: "You must specify subnets with an associated IPv6 CIDR block. (Service: ElasticLoadBalancingV2, Status Code: 400, Request ID: 9ede78e6-777b-4eb6-a986-0973cb9a261e)" (RequestToken: b4744d0d-f89d-47c1-3f5e-f4131dbd24e8, HandlerErrorCode: InvalidRequest)
at FullCloudFormationDeployment.monitorDeployment (/workspaces/aws-cdk/packages/aws-cdk/lib/api/deploy-stack.js:252:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.deployStack (/workspaces/aws-cdk/packages/aws-cdk/lib/cdk-toolkit.js:232:32)
at async /workspaces/aws-cdk/packages/aws-cdk/lib/util/work-graph.js:88:21
❌ Deployment failed: Error: The stack named aws-ecs-integ-alb-ec2-cmd-entrypoint failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Resource handler returned message: "You must specify subnets with an associated IPv6 CIDR block. (Service: ElasticLoadBalancingV2, Status Code: 400, Request ID: 9ede78e6-777b-4eb6-a986-0973cb9a261e)" (RequestToken: b4744d0d-f89d-47c1-3f5e-f4131dbd24e8, HandlerErrorCode: InvalidRequest)
at FullCloudFormationDeployment.monitorDeployment (/workspaces/aws-cdk/packages/aws-cdk/lib/api/deploy-stack.js:252:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.deployStack (/workspaces/aws-cdk/packages/aws-cdk/lib/cdk-toolkit.js:232:32)
at async /workspaces/aws-cdk/packages/aws-cdk/lib/util/work-graph.js:88:21
packages/@aws-cdk-testing/framework-integ/test/aws-ec2/test/integ.vpc-ipv6.ts
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/ec2/integ.alb-ecs-service-ipaddress-type.ts
const vpc = new ec2.Vpc(stack, 'Vpc', {
maxAzs: 2,
restrictDefaultSecurityGroup: false,
ipProtocol: ec2.IpProtocol.DUAL_STACK,
subnetConfiguration: [
{
name: 'subnet1',
subnetType: ec2.SubnetType.PUBLIC,
ipv6AssignAddressOnCreation: true,
},
{
name: 'subnet2',
subnetType: ec2.SubnetType.PUBLIC,
ipv6AssignAddressOnCreation: true,
},
],
});
code: zsh
❌ aws-ecs-integ-alb-ec2-ipaddress-type failed: Error: The stack named aws-ecs-integ-alb-ec2-ipaddress-type failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Resource handler returned message: "A load balancer cannot be attached to multiple subnets in the same Availability Zone (Service: ElasticLoadBalancingV2, Status Code: 400, Request ID: 6b6fe297-c528-4daa-84b0-5009aacd529a)" (RequestToken: dcd47275-9cb0-40a0-80e6-207c1ded2305, HandlerErrorCode: InvalidRequest)
at FullCloudFormationDeployment.monitorDeployment (/workspaces/aws-cdk/packages/aws-cdk/lib/api/deploy-stack.js:252:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.deployStack (/workspaces/aws-cdk/packages/aws-cdk/lib/cdk-toolkit.js:232:32)
at async /workspaces/aws-cdk/packages/aws-cdk/lib/util/work-graph.js:88:21
Enable public IP assignment
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/ec2/integ.alb-ecs-service-ipaddress-type.ts
mapPublicIpOnLaunch: true,
ipv6AssignAddressOnCreation: true,
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.alb-fargate-service-ipaddress-type.ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as cdk from 'aws-cdk-lib';
import * as integ from '@aws-cdk/integ-tests-alpha';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import { AUTOSCALING_GENERATE_LAUNCH_TEMPLATE } from 'aws-cdk-lib/cx-api';
import { IpAddressType } from 'aws-cdk-lib/aws-elasticloadbalancingv2';
const app = new cdk.App({ postCliContext: { [AUTOSCALING_GENERATE_LAUNCH_TEMPLATE]: false } });
const stack = new cdk.Stack(app, 'aws-ecs-integ-alb-fargate-ipaddress-type');
// Create VPC and ECS Cluster
const vpc = new ec2.Vpc(stack, 'Vpc', {
maxAzs: 2,
restrictDefaultSecurityGroup: false,
ipProtocol: ec2.IpProtocol.DUAL_STACK,
subnetConfiguration: [
{
name: 'subnet',
subnetType: ec2.SubnetType.PUBLIC,
mapPublicIpOnLaunch: true,
ipv6AssignAddressOnCreation: true,
},
],
});
const cluster = new ecs.Cluster(stack, 'FargateCluster', { vpc });
// Create ALB service with ipAddressType
new ecsPatterns.ApplicationLoadBalancedFargateService(
stack,
'AlbFargateServiceWithIpAddressType',
{
cluster,
memoryLimitMiB: 512,
cpu: 256,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
},
ipAddressType: IpAddressType.DUAL_STACK,
},
);
new integ.IntegTest(app, 'AlbFargateServiceWithIpAddressType', {
testCases: stack,
});
app.synth();
code: zsh
CannotPullContainerError: pull image manifest has been retried 5 time(s): failed to resolve ref docker.io/amazon/amazon-ecs-sample:latest: failed to do request: Head "https://registry-1.docker.io/v2/amazon/amazon-ecs-sample/manifests/latest": dial tcp 2600:1f18:2148:bc02:445d:9ace:d20b:c303:443: i/o timeout
https://scrapbox.io/files/65d52134cfe07e00265c3f43.png
code:packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.alb-fargate-service-ipaddress-type.ts
assignPublicIp: true,
code: zsh
$ yarn integ aws-ecs-patterns/test/ec2/integ.application-load-balanced-ecs-service.js --update-on-failed --parallel-regions eu-west-1
yarn run v1.22.19
$ integ-runner --language javascript aws-ecs-patterns/test/ec2/integ.application-load-balanced-ecs-service.js --update-on-failed --parallel-regions eu-west-1
Verifying integration test snapshots...
CHANGED aws-ecs-patterns/test/ec2/integ.application-load-balanced-ecs-service 0.894s
Resources
~ AWS::ElasticLoadBalancingV2::LoadBalancer myServiceLB168895E1
└─ + IpAddressType
└─ ipv4
Snapshot Results:
Tests: 1 failed, 1 total
Failed: /Users/herotaka/Desktop/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/ec2/integ.application-load-balanced-ecs-service.js
Running integration tests for failed tests...
Running in parallel across regions: eu-west-1
Running test /Users/herotaka/Desktop/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/ec2/integ.application-load-balanced-ecs-service.js in eu-west-1
issue
(ecs-patterns): NetworkLoadBalancedServiceBase does not support container port mapping using taskDefinition
Difference between targetPort and port in Kubernetes Service definition
code: packages/aws-cdk-lib/aws-ecs-patterns/lib/base/network-load-balanced-service-base.ts
/**
* Target Group port of the network load balancer that will send requests to registered target
*
* @default 80
*/
readonly targetPort?: number;
const targetProps = {
port: props.targetPort ?? props.taskImageOptions?.containerPort ?? 80,
};
code: packages/@aws-cdk-testing/framework-integ/test/aws-ecs-patterns/test/fargate/integ.l3.ts
targetPort: 80,
code: packages/aws-cdk-lib/aws-ecs-patterns/README.md
The target group port defaults to 80. If targetPort is specified it takes precedence; otherwise taskImageOptions.containerPort is used.
In the example below, the target group port is 82.
`ts
declare const cluster: ecs.Cluster;
const loadBalancedFargateService = new ecsPatterns.NetworkLoadBalancedFargateService(this, 'Service', {
cluster,
memoryLimitMiB: 1024,
cpu: 512,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
containerPort: 81,
},
targetPort: 82,
});
`
code: packages/aws-cdk-lib/aws-ecs-patterns/test/fargate/load-balanced-fargate-service-v2.test.ts
test('Fargate networkloadbalanced target group uses 80 as default port', () => {
// GIVEN
const stack = new Stack();
const vpc = new Vpc(stack, 'VPC');
const cluster = new ecs.Cluster(stack, 'Cluster', { vpc });
const taskDefinition = new ecs.FargateTaskDefinition(stack, 'FargateTaskDef');
taskDefinition.addContainer('Container', {
image: ContainerImage.fromRegistry('public.ecr.aws/ecs-sample-image/amazon-ecs-sample:latest'),
portMappings: [{ containerPort: 81 }],
});
new NetworkLoadBalancedFargateService(stack, 'NLBService', {
cluster: cluster,
memoryLimitMiB: 1024,
cpu: 512,
taskDefinition,
listenerPort: 8181,
});
Template.fromStack(stack).hasResourceProperties('AWS::ElasticLoadBalancingV2::TargetGroup', {
Port: 80,
Protocol: 'TCP',
TargetType: 'ip',
VpcId: {
Ref: 'VPCB9E5F0B4',
},
});
});
test('Fargate networkloadbalanced target group uses targetPort when targetPort is set', () => {
// GIVEN
const stack = new Stack();
const vpc = new Vpc(stack, 'VPC');
const cluster = new ecs.Cluster(stack, 'Cluster', { vpc });
const taskDefinition = new ecs.FargateTaskDefinition(stack, 'FargateTaskDef');
taskDefinition.addContainer('Container', {
image: ContainerImage.fromRegistry('public.ecr.aws/ecs-sample-image/amazon-ecs-sample:latest'),
portMappings: [{ containerPort: 80 }],
});
new NetworkLoadBalancedFargateService(stack, 'NLBService', {
cluster: cluster,
memoryLimitMiB: 1024,
cpu: 512,
taskDefinition,
targetPort: 81,
listenerPort: 8181,
});
Template.fromStack(stack).hasResourceProperties('AWS::ElasticLoadBalancingV2::TargetGroup', {
Port: 81,
Protocol: 'TCP',
TargetType: 'ip',
VpcId: {
Ref: 'VPCB9E5F0B4',
},
});
});
issue
ssm: unable to use serialized YAML string values as content for CfnDocument
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-document.html#cfn-ssm-document-content
Parent (generated) source:
ssm.generated.ts
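For reference, a sketch of the L1 in question (the document body here is made up; the issue is about passing an already-serialized YAML string as content):
code:ts
import * as cdk from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ssm-doc-sketch');

// Passing content as a plain object works; the generated L1 serializes it.
new ssm.CfnDocument(stack, 'ObjectContentDoc', {
  documentType: 'Command',
  content: {
    schemaVersion: '2.2',
    description: 'sample command document',
    mainSteps: [
      { action: 'aws:runShellScript', name: 'hello', inputs: { runCommand: ['echo hello'] } },
    ],
  },
});

// The issue is about wanting to hand over an already-serialized YAML string
// (e.g. read from a file) as content, together with documentFormat: 'YAML'.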
issue
AutoScalingGroup: LaunchTemplateOverrides is missing the InstanceRequirements Attribute in AWS CDK L2 construct
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-autoscaling-autoscalinggroup-launchtemplateoverrides.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-autoscaling-autoscalinggroup-instancerequirements.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-mixed-instances-group-attribute-based-instance-type-selection.html
code: asg.yaml
Resources:
MyLaunchConfig:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
ImageId: ami-0e347cff037f057c4
InstanceType: t2.micro
SecurityGroups:
- sg-0e2bd2a94e32529fb
MyAutoScalingGroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
MinSize: 1
MaxSize: 1
LaunchConfigurationName: !Ref MyLaunchConfig
VPCZoneIdentifier:
- subnet-0f233ab1a8c920b74
https://scrapbox.io/files/65855c5559c852002371c2a0.png
code: asg.yaml
MyAutoScalingGroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
MinSize: 1
MaxSize: 1
LaunchConfigurationName: !Ref MyLaunchConfig
VPCZoneIdentifier:
- subnet-0f233ab1a8c920b74
MixedInstancesPolicy:
LaunchTemplate:
Overrides:
- InstanceType: t2.small
https://scrapbox.io/files/65855cf63bcd9d0024751f1d.png
code: asg.yaml
Resources:
MyLaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Properties:
LaunchTemplateData:
ImageId: ami-0e347cff037f057c4
InstanceType: t2.micro
MyAutoScalingGroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
MinSize: 1
MaxSize: 1
LaunchTemplate:
LaunchTemplateId: !Ref MyLaunchTemplate
Version: !GetAtt MyLaunchTemplate.LatestVersionNumber
https://scrapbox.io/files/65855e32dee8e2002388c567.png
code: asg.yaml
Resources:
MyLaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Properties:
LaunchTemplateData:
ImageId: ami-0e347cff037f057c4
InstanceType: t2.micro
MyAutoScalingGroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
MinSize: 1
MaxSize: 1
MixedInstancesPolicy:
LaunchTemplate:
LaunchTemplateSpecification:
LaunchTemplateId: !Ref MyLaunchTemplate
Version: !GetAtt MyLaunchTemplate.LatestVersionNumber
Overrides:
- InstanceType: t2.small
https://scrapbox.io/files/65855e32dee8e2002388c567.png
code: asg.yaml
Overrides:
- InstanceType: t2.small
- InstanceRequirements:
VCpuCount:
Max: 10
Min: 2
MemoryMiB:
Min: 4096
code: console
Resource handler returned message: "You cannot mix single instance and instance requirements launch template override in the same mixed instance policy. Remove the launch template overrides that use single instance types, or the ones using instance requirements. (Service: AutoScaling, Status Code: 400, Request ID: e3d0027e-bfee-4a3c-981b-54d985fe41a4)" (RequestToken: 27a9b623-cdea-9df5-6a6f-0833dc46e815, HandlerErrorCode: InvalidRequest)
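So the overrides must be either all InstanceType or all InstanceRequirements. Until the L2 construct gains support, a sketch of the same configuration through the generated L1 (CfnAutoScalingGroup), using only InstanceRequirements and the IDs from the YAML above (property names mirror the CloudFormation spec):
code:ts
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'asg-instance-requirements-sketch');

const launchTemplate = new ec2.CfnLaunchTemplate(stack, 'MyLaunchTemplate', {
  launchTemplateData: {
    imageId: 'ami-0e347cff037f057c4',
    instanceType: 't2.micro',
  },
});

new autoscaling.CfnAutoScalingGroup(stack, 'MyAutoScalingGroup', {
  minSize: '1',
  maxSize: '1',
  vpcZoneIdentifier: ['subnet-0f233ab1a8c920b74'],
  mixedInstancesPolicy: {
    launchTemplate: {
      launchTemplateSpecification: {
        launchTemplateId: launchTemplate.ref,
        version: launchTemplate.attrLatestVersionNumber,
      },
      // Only InstanceRequirements here; mixing these with single InstanceType
      // overrides is what triggered the 400 error above.
      overrides: [
        {
          instanceRequirements: {
            vCpuCount: { min: 2, max: 10 },
            memoryMiB: { min: 4096 },
          },
        },
      ],
    },
  },
});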
issue
core: Allow user to exclude resource tagging changes from the cdk diff
Try the DevContainer
code: zsh
~/desktop
$ sudo chmod 744 ~/.docker/buildx/current
Password:
~/desktop
$ ls -la ~/.docker/buildx/current
-rwxr--r-- 1 root staff 48 3 22 03:35 /Users/wafuwafu13/.docker/buildx/current
code: zsh
aws-cdk git:(main) NODE_OPTIONS="--max-old-space-size=8192" npx lerna run build --scope=aws-cdk-lib
Build times for aws-cdk-lib: Total time (6m40.4s) | /workspaces/aws-cdk/tools/@aws-cdk/cdk-build-tools/node_modules/jsii/bin/jsii (4m7.1s) | ts-node ./scripts/verify-imports-resolve-same.ts && ts-node ./scripts/verify-imports-shielded.ts && ts-node ./cx-api/build-tools/flag-report.ts (19.3s) | ts-node -P tsconfig.dev.json scripts/gen.ts (9.2s) | npx ts-node -P tsconfig.dev.json region-info/build-tools/generate-static-data.ts && (cp -f $(node -p 'require.resolve("aws-sdk/apis/metadata.json")') custom-resources/lib/aws-custom-resource/sdk-api-metadata.json && rm -rf custom-resources/test/aws-custom-resource/cdk.out) && (rm -rf core/test/fs/fixtures && cd core/test/fs && tar -xzf fixtures.tar.gz) && (rm -rf assets/test/fs/fixtures && cd assets/test/fs && tar -xzvf fixtures.tar.gz) (4.0s)
———————————————————————————————————————————————————————————————————————————————————————————————
Lerna (powered by Nx) Successfully ran target build for project aws-cdk-lib and 8 tasks it depends on (10m)
Nx read the output from the cache instead of running the command for 8 out of 9 tasks.
code: zsh
aws-cdk-lib git:(main) NODE_OPTIONS="--max-old-space-size=8192" yarn build
Build times for aws-cdk-lib: Total time (6m34.4s) | /workspaces/aws-cdk/tools/@aws-cdk/cdk-build-tools/node_modules/jsii/bin/jsii (3m58.9s) | ts-node ./scripts/verify-imports-resolve-same.ts && ts-node ./scripts/verify-imports-shielded.ts && ts-node ./cx-api/build-tools/flag-report.ts (18.9s) | ts-node -P tsconfig.dev.json scripts/gen.ts (8.6s) | npx ts-node -P tsconfig.dev.json region-info/build-tools/generate-static-data.ts && (cp -f $(node -p 'require.resolve("aws-sdk/apis/metadata.json")') custom-resources/lib/aws-custom-resource/sdk-api-metadata.json && rm -rf custom-resources/test/aws-custom-resource/cdk.out) && (rm -rf core/test/fs/fixtures && cd core/test/fs && tar -xzf fixtures.tar.gz) && (rm -rf assets/test/fs/fixtures && cd assets/test/fs && tar -xzvf fixtures.tar.gz) (4.2s)
Done in 394.77s.
Resolved by increasing Docker's resource allocation
code: zsh
Killed
Error: /workspaces/aws-cdk/tools/@aws-cdk/cdk-build-tools/node_modules/jsii/bin/jsii --silence-warnings=reserved-word --add-deprecation-warnings --compress-assembly '--strip-deprecated /workspaces/aws-cdk/deprecated_apis.txt' exited with error code 137
Build failed.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
npm ERR! Lifecycle script build failed with error:
npm ERR! Error: command failed
npm ERR! in workspace: aws-cdk-lib@0.0.0
npm ERR! at location: /workspaces/aws-cdk/packages/aws-cdk-lib
———————————————————————————————————————————————————————————————————————————
Lerna (powered by Nx) Ran target build for project aws-cdk-lib and 8 task(s) they depend on (7m)
✖ 1/9 failed
✔ 8/9 succeeded 0 read from cache
PR
fix(core): prevent the error when the condition is split into groups of 10 and 1 in Fn.conditionOr()
code: cfn-fn.ts
if (conditions.length === 1) {
console.dir(conditions[0])
console.dir((conditions[0] as any)['value']['Fn::Or'][0])
console.dir((conditions[0] as any)['value']['Fn::Or'][1])
return conditions[0] as ICfnRuleConditionExpression;
}
● Console
console.dir
FnOr {
creationStack: [ 'stack traces disabled' ],
value: { 'Fn::Or': [ FnOr, FnOr ] },
typeHint: 'string',
disambiguator: true
}
at Function.dir [as conditionOr] (core/lib/cfn-fn.ts:335:15)
console.dir
FnOr {
creationStack: [ 'stack traces disabled' ],
value: {
'Fn::Or': [
FnEquals, FnEquals,
FnEquals, FnEquals,
FnEquals, FnEquals,
FnEquals, FnEquals,
FnEquals
]
},
typeHint: 'string',
disambiguator: true
}
at Function.dir [as conditionOr] (core/lib/cfn-fn.ts:336:15)
console.dir
FnOr {
creationStack: [ 'stack traces disabled' ],
value: { 'Fn::Or': [ FnEquals, FnEquals ] },
typeHint: 'string',
disambiguator: true
}
code: .ts
public static conditionOr(...conditions: ICfnConditionExpression[]): ICfnRuleConditionExpression {
if (conditions.length === 0) {
throw new Error('Fn.conditionOr() needs at least one argument');
}
if (conditions.length === 1) {
return conditions[0] as ICfnRuleConditionExpression;
}
// prevent the error "Fn::Or object requires a list of at least 2" when the condition is split into groups of 10 and 1
if (conditions.length > 10) {
return Fn.conditionOr(..._inGroupsOf(conditions, 10).map(group => Fn.conditionOr(...group)));
}
return Fn.conditionOr(..._inGroupsOf(conditions, 10).map(group => new FnOr(...group)));
}
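_inGroupsOf is not shown above; presumably it is just a chunking helper along these lines (sketch):
code:ts
// Sketch of a chunking helper like _inGroupsOf (the real one lives in cfn-fn.ts).
function _inGroupsOf<T>(array: T[], maxGroup: number): T[][] {
  const result = new Array<T[]>();
  for (let i = 0; i < array.length; i += maxGroup) {
    result.push(array.slice(i, i + maxGroup));
  }
  return result;
}

// e.g. 11 conditions -> one group of 10 and one group of 1,
// which is why the 10n + 1 case needs the extra branch above.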
code: .ts
test('condition length is 10n + 1 in Fn.conditionOr', () => {
// GIVEN
const stack = new cdk.Stack();
const expression = cdk.Fn.conditionOr(
cdk.Fn.conditionEquals('a', '1'),
cdk.Fn.conditionEquals('b', '2'),
cdk.Fn.conditionEquals('c', '3'),
cdk.Fn.conditionEquals('d', '4'),
cdk.Fn.conditionEquals('e', '5'),
cdk.Fn.conditionEquals('f', '6'),
cdk.Fn.conditionEquals('g', '7'),
cdk.Fn.conditionEquals('h', '8'),
cdk.Fn.conditionEquals('i', '9'),
cdk.Fn.conditionEquals('j', '10'),
cdk.Fn.conditionEquals('k', '11'),
);
// WHEN
new cdk.CfnCondition(stack, 'Condition', { expression });
// THEN
expect(toCloudFormation(stack)).toEqual({
Conditions: {
Condition: {
'Fn::Or': [
{
'Fn::Or': [
{ 'Fn::Equals': ['a', '1'] },
{ 'Fn::Equals': ['b', '2'] },
{ 'Fn::Equals': ['c', '3'] },
{ 'Fn::Equals': ['d', '4'] },
{ 'Fn::Equals': ['e', '5'] },
{ 'Fn::Equals': ['f', '6'] },
{ 'Fn::Equals': ['g', '7'] },
{ 'Fn::Equals': ['h', '8'] },
{ 'Fn::Equals': ['i', '9'] },
{ 'Fn::Equals': ['j', '10'] },
],
},
{
'Fn::Equals': ['k', '11'],
},
],
},
},
});
});
test('condition length is more than 10 in Fn.conditionOr', () => {
// GIVEN
const stack = new cdk.Stack();
const expression = cdk.Fn.conditionOr(
cdk.Fn.conditionEquals('a', '1'),
cdk.Fn.conditionEquals('b', '2'),
cdk.Fn.conditionEquals('c', '3'),
cdk.Fn.conditionEquals('d', '4'),
cdk.Fn.conditionEquals('e', '5'),
cdk.Fn.conditionEquals('f', '6'),
cdk.Fn.conditionEquals('g', '7'),
cdk.Fn.conditionEquals('h', '8'),
cdk.Fn.conditionEquals('i', '9'),
cdk.Fn.conditionEquals('j', '10'),
cdk.Fn.conditionEquals('k', '11'),
cdk.Fn.conditionEquals('l', '12'),
);
// WHEN
new cdk.CfnCondition(stack, 'Condition', { expression });
// THEN
expect(toCloudFormation(stack)).toEqual({
Conditions: {
Condition: {
'Fn::Or': [
{
'Fn::Or': [
{ 'Fn::Equals': ['a', '1'] },
{ 'Fn::Equals': ['b', '2'] },
{ 'Fn::Equals': ['c', '3'] },
{ 'Fn::Equals': ['d', '4'] },
{ 'Fn::Equals': ['e', '5'] },
{ 'Fn::Equals': ['f', '6'] },
{ 'Fn::Equals': ['g', '7'] },
{ 'Fn::Equals': ['h', '8'] },
{ 'Fn::Equals': ['i', '9'] },
{ 'Fn::Equals': ['j', '10'] },
],
},
{
'Fn::Or': [
{ 'Fn::Equals': ['k', '11'] },
{ 'Fn::Equals': ['l', '12'] },
],
},
],
},
},
});
});
issue
Node.of
issue
test(s3): add neither arn nor name are provided case
parseBucketName -> undefined
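A sketch of the added test case (the parseBucketName signature and import path are assumptions):
code:ts
// Sketch only: assumes parseBucketName(scope, props) in aws-s3/lib/util.ts
// returns undefined when neither bucketArn nor bucketName is provided.
import { Stack } from 'aws-cdk-lib';
import { parseBucketName } from '../lib/util';

test('parseBucketName returns undefined when neither arn nor name is provided', () => {
  const stack = new Stack();
  expect(parseBucketName(stack, {})).toBeUndefined();
});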
code: zsh
~/Desktop/aws-cdk/packages/aws-cdk-lib
$ yarn test
...
=============================== Coverage summary ===============================
Statements : 57.43% ( 59305/103261 )
Branches : 40.79% ( 16576/40633 )
Functions : 66.88% ( 11441/17105 )
Lines : 58.86% ( 57366/97449 )
================================================================================
Summary of all failing tests
FAIL custom-resources/test/aws-custom-resource/runtime/index.test.js
● SDK global credentials are never set
expect(received).toBeNull()
Received: {"accessKeyId": "AKIAXYXV522WQUUSSPE3", "disableAssumeRole": true, "expireTime": null, "expired": false, "filename": undefined, "httpOptions": null, "preferStaticCredentials": false, "profile": "default", "refreshCallbacks": [], "sessionToken": undefined, "tokenCodeFn": null}
67 | // THEN
68 | expect(AWS.config).toBeInstanceOf(AWS.Config);
69 | expect(AWS.config.credentials).toBeNull();
| ^
70 | });
71 |
72 | test('SDK credentials are not persisted across subsequent invocations', async () => {
at Object.toBeNull (custom-resources/test/aws-custom-resource/runtime/index.test.ts:69:34)
FAIL aws-lambda-nodejs/test/docker.test.js
● esbuild is available
expect(received).toBe(expected) // Object.is equality
Expected: 0
Received: 1
6 | const process = spawnSync(docker, ['build', '-t', 'esbuild', path.join(__dirname, '../lib')], { stdio: 'inherit' });
7 | expect(process.error).toBeUndefined();
8 | expect(process.status).toBe(0);
| ^
9 | });
10 |
11 | test('esbuild is available', () => {
at Object.toBe (aws-lambda-nodejs/test/docker.test.ts:8:26)
● can npm install with non root user
expect(received).toBe(expected) // Object.is equality
Expected: 0
Received: 1
6 | const process = spawnSync(docker, ['build', '-t', 'esbuild', path.join(__dirname, '../lib')], { stdio: 'inherit' });
7 | expect(process.error).toBeUndefined();
8 | expect(process.status).toBe(0);
| ^
9 | });
10 |
11 | test('esbuild is available', () => {
at Object.toBe (aws-lambda-nodejs/test/docker.test.ts:8:26)
● can yarn install with non root user
expect(received).toBe(expected) // Object.is equality
Expected: 0
Received: 1
6 | const process = spawnSync(docker, ['build', '-t', 'esbuild', path.join(__dirname, '../lib')], { stdio: 'inherit' });
7 | expect(process.error).toBeUndefined();
8 | expect(process.status).toBe(0);
| ^
9 | });
10 |
11 | test('esbuild is available', () => {
at Object.toBe (aws-lambda-nodejs/test/docker.test.ts:8:26)
● can pnpm install with non root user
expect(received).toBe(expected) // Object.is equality
Expected: 0
Received: 1
6 | const process = spawnSync(docker, ['build', '-t', 'esbuild', path.join(__dirname, '../lib')], { stdio: 'inherit' });
7 | expect(process.error).toBeUndefined();
8 | expect(process.status).toBe(0);
| ^
9 | });
10 |
11 | test('esbuild is available', () => {
at Object.toBe (aws-lambda-nodejs/test/docker.test.ts:8:26)
● cache folders have the right permissions
expect(received).toBe(expected) // Object.is equality
Expected: 0
Received: 1
6 | const process = spawnSync(docker, ['build', '-t', 'esbuild', path.join(__dirname, '../lib')], { stdio: 'inherit' });
7 | expect(process.error).toBeUndefined();
8 | expect(process.status).toBe(0);
| ^
9 | });
10 |
11 | test('esbuild is available', () => {
at Object.toBe (aws-lambda-nodejs/test/docker.test.ts:8:26)
Test Suites: 2 failed, 3 skipped, 650 passed, 652 of 655 total
Tests: 6 failed, 43 skipped, 10061 passed, 10110 total
Snapshots: 20 passed, 20 total
Time: 735.335 s
Ran all test suites.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
issue
code: zsh
console.warn
WARNING aws-cdk-lib.aws_ec2.MachineImage#latestAmazonLinux is deprecated.
use MachineImage.latestAmazonLinux2 instead
This API will be removed in the next major release.
84790 | throw new DeprecationError(message);
84791 | case "warn":
84792 | console.warn("WARNING", message);
| ^
84793 | break;
84794 | }
84795 | }
issue
code: vpn.ts
/**
* Dummy member
* TODO: remove once https://github.com/aws/jsii/issues/231 is fixed
*/
fix(jsii): Correctly handle singleton enums
When an enum has only one option, TypeScript handles it in a special way
and tries very hard to hide the enum declaration in favor of the sole
member. This caused incorrect type names and kinds to be emitted in the
JSII assembly, resulting in incorrect behavior.
chore: upgrade to jsii@5.0.2
issue
https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md
Examples:
integ.destinations.ts
integ.put-events.ts <- not found