This migration addresses the "Query key condition not supported" error by restructuring the DynamoDB table schema from a single partition key to a composite key structure.
Old schema (broken):

- Partition Key: `pk` containing `${namespace}:${userKey}`
- Query Pattern: Used invalid `begins_with(pk, :prefix)` in `KeyConditionExpression`
New schema:

- Partition Key: `pk` containing just the namespace
- Sort Key: `sk` containing the user key
- Query Pattern: Uses valid `pk = :namespace AND begins_with(sk, :prefix)`
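In code, a `kv-list` lookup under the new schema can be expressed as a Query input like the following sketch (`buildListQuery` is an illustrative helper, not part of the actual `kv-store.ts`):

```javascript
// Build the input for a DynamoDB Query that lists keys in one namespace,
// restricted to keys starting with a prefix.
function buildListQuery(tableName, namespace, prefix) {
  return {
    TableName: tableName,
    // Valid: equality on the partition key, begins_with on the sort key.
    KeyConditionExpression: 'pk = :namespace AND begins_with(sk, :prefix)',
    ExpressionAttributeValues: {
      ':namespace': namespace,
      ':prefix': prefix
    }
  };
}

// The input would be sent with the Document Client, e.g.:
//   await client.send(new QueryCommand(buildListQuery('my-table', 'user123', 'config')));
```

Because the namespace is now an exact match on the partition key, DynamoDB accepts `begins_with` on the sort key, which was the operation the old schema could not support.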
Migration steps:

- Create a new DynamoDB table with the correct schema:

  ```bash
  aws dynamodb create-table \
    --table-name your-new-table-name \
    --attribute-definitions \
      AttributeName=pk,AttributeType=S \
      AttributeName=sk,AttributeType=S \
    --key-schema \
      AttributeName=pk,KeyType=HASH \
      AttributeName=sk,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST
  ```

- Update your environment variable:

  ```
  DYNAMODB_TABLE=your-new-table-name
  ```

- Optional: Migrate existing data using the migration script below.
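If you would rather create the table from code than from the CLI, the same schema can be described as a `CreateTableCommand` input; a sketch (`buildCreateTableInput` is a hypothetical helper, and sending it assumes `@aws-sdk/client-dynamodb` is available):

```javascript
// The same schema as the CLI command above, expressed as a CreateTableCommand input.
function buildCreateTableInput(tableName) {
  return {
    TableName: tableName,
    AttributeDefinitions: [
      { AttributeName: 'pk', AttributeType: 'S' },
      { AttributeName: 'sk', AttributeType: 'S' }
    ],
    KeySchema: [
      { AttributeName: 'pk', KeyType: 'HASH' },  // partition key: namespace
      { AttributeName: 'sk', KeyType: 'RANGE' }  // sort key: user key
    ],
    BillingMode: 'PAY_PER_REQUEST'
  };
}

// Would be sent as:
//   const { DynamoDBClient, CreateTableCommand } = require('@aws-sdk/client-dynamodb');
//   await new DynamoDBClient({}).send(new CreateTableCommand(buildCreateTableInput('your-new-table-name')));
```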
If you have existing data in the old format, use this migration script:
```javascript
// Migration script to convert old format to new format
async function migrateKVData() {
  const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
  const { DynamoDBDocumentClient, ScanCommand, PutCommand } = require('@aws-sdk/lib-dynamodb');

  const client = DynamoDBDocumentClient.from(new DynamoDBClient({
    region: 'us-east-1',
    credentials: {
      accessKeyId: 'your-access-key',
      secretAccessKey: 'your-secret-key'
    }
  }));

  const oldTableName = 'your-old-table';
  const newTableName = 'your-new-table';

  // Scan all items from the old table. Scan returns at most 1 MB per page,
  // so follow LastEvaluatedKey until the table is exhausted.
  let lastEvaluatedKey;
  do {
    const scanResult = await client.send(new ScanCommand({
      TableName: oldTableName,
      ExclusiveStartKey: lastEvaluatedKey
    }));

    for (const item of scanResult.Items || []) {
      // Parse old format: pk = "namespace:userkey"
      const oldPk = item.pk;
      const colonIndex = oldPk.indexOf(':');
      if (colonIndex === -1) continue; // Skip items that don't match the old format

      const namespace = oldPk.substring(0, colonIndex);
      const userKey = oldPk.substring(colonIndex + 1);

      // Create the item in the new composite-key format
      const newItem = {
        pk: namespace, // Just the namespace
        sk: userKey,   // Just the user key
        data: item.data,
        created: item.created,
        updated: item.updated
      };
      if (item.ttl) {
        newItem.ttl = item.ttl;
      }

      // Put the item in the new table
      await client.send(new PutCommand({
        TableName: newTableName,
        Item: newItem
      }));
      console.log(`Migrated: ${namespace}:${userKey}`);
    }

    lastEvaluatedKey = scanResult.LastEvaluatedKey;
  } while (lastEvaluatedKey);

  console.log('Migration completed!');
}
```
Example item, old format:

```json
{
  "pk": "user123:settings",
  "data": { "theme": "dark" },
  "created": "2024-01-01T00:00:00Z",
  "updated": "2024-01-01T00:00:00Z"
}
```

Same item, new format:

```json
{
  "pk": "user123",
  "sk": "settings",
  "data": { "theme": "dark" },
  "created": "2024-01-01T00:00:00Z",
  "updated": "2024-01-01T00:00:00Z"
}
```
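The per-item transformation shown above can be captured as a small pure function; this is a hypothetical `convertItem` helper with the same logic as the migration script:

```javascript
// Convert an item from the old single-key format to the new composite-key format.
// Returns null for items whose pk lacks the "namespace:userKey" separator.
function convertItem(oldItem) {
  const colonIndex = oldItem.pk.indexOf(':');
  if (colonIndex === -1) return null;

  const newItem = {
    pk: oldItem.pk.substring(0, colonIndex),  // namespace
    sk: oldItem.pk.substring(colonIndex + 1), // user key
    data: oldItem.data,
    created: oldItem.created,
    updated: oldItem.updated
  };
  if (oldItem.ttl) newItem.ttl = oldItem.ttl;
  return newItem;
}
```

A pure function like this is easy to unit-test against sample items before pointing the migration at real data.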
- Correct DynamoDB Usage: Uses proper KeyConditionExpression syntax
- Efficient Queries: Leverages DynamoDB's composite key capabilities
- Better Performance: Query operations are more efficient than Scan
- Namespace Isolation: Each user's data is efficiently partitioned
- Prefix Matching: Supports efficient prefix queries within namespaces
After migration, test the KV operations:
```bash
# Test kv-list (this was failing before)
curl -X POST https://your-val-town-url.web.val.run/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-aws-secret-access-key" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "kv-list",
      "arguments": { "prefix": "config" }
    },
    "id": 1
  }'
```
If this returns results without the "Query key condition not supported" error, the migration was successful.
If you need to roll back:

- Keep the old table until you're confident the new structure works
- Update the `DYNAMODB_TABLE` environment variable back to the old table name
- Revert the code changes in `kv-store.ts`
If you encounter issues during migration:
- Check that your new table has the correct schema (pk as HASH, sk as RANGE)
- Verify your AWS permissions include Query operations
- Test with a simple kv-put and kv-get operation first
- Check the CloudWatch logs for detailed error messages
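For that kv-put/kv-get smoke test, you can reuse the JSON-RPC shape from the curl example above. A sketch; the kv-put/kv-get argument names (`key`, `value`) are assumptions here, so check `kv-store.ts` for the actual tool schemas:

```javascript
// Build a tools/call JSON-RPC request body for the MCP endpoint,
// following the same shape as the kv-list curl example.
function buildToolCall(name, args, id) {
  return {
    jsonrpc: '2.0',
    method: 'tools/call',
    params: { name, arguments: args },
    id
  };
}

// Hypothetical smoke test payloads ("key"/"value" argument names are assumptions):
const putRequest = buildToolCall('kv-put', { key: 'config:test', value: 'hello' }, 1);
const getRequest = buildToolCall('kv-get', { key: 'config:test' }, 2);

// Each body would be POSTed like the curl example:
//   await fetch('https://your-val-town-url.web.val.run/mcp', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json', Authorization: 'Bearer your-aws-secret-access-key' },
//     body: JSON.stringify(putRequest)
//   });
```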