MongoS3ContentStorage is an implementation of the IContentStorage interface from @lumieducation/h5p-server that uses MongoDB to store the parameters and metadata of content objects and an S3-compatible storage system to store files (images, video, audio etc.).
Note: You must create the S3 bucket manually before it can be used by MongoS3ContentStorage!
The implementation depends on the aws-sdk and mongodb npm packages. You must add them manually to your application using npm install aws-sdk mongodb.
You must import the storage implementation via a submodule:
import {
    MongoS3ContentStorage,
    initS3,
    initMongo
} from '@lumieducation/h5p-mongos3';
Initialize the storage implementation like this:
const storage = new MongoS3ContentStorage(
    initS3({
        credentials: {
            accessKeyId: 's3accesskey', // optional if env. variable is set
            secretAccessKey: 's3accesssecret' // optional if env. variable is set
        },
        endpoint: 'http://127.0.0.1:9000', // optional if env. variable is set
        region: 'us-east-1', // optional if env. variable is set
        forcePathStyle: true
    }),
    (
        await initMongo(
            'mongodb://127.0.0.1:27017', // connection string; optional if env. variable is set
            'testdb1', // database name; optional if env. variable is set
            'root', // user; optional if env. variable is set
            'h5pnodejs' // password; optional if env. variable is set
        )
    ).collection('h5p'),
    { s3Bucket: 'h5pcontentbucket' }
);
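If you set the environment variables listed in the notes below, the connection details can be left out of the code entirely. A minimal sketch, assuming the same collection and bucket names as above and that all connection values come from the environment:

// Connection details are read from environment variables (see the notes
// below); only the S3 path-style flag, the collection name and the bucket
// name are passed explicitly.
const storage = new MongoS3ContentStorage(
    initS3({ forcePathStyle: true }),
    (await initMongo()).collection('h5p'),
    { s3Bucket: 'h5pcontentbucket' }
);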
Notes:

- You can pass the configuration values for the aws-sdk npm package and the mongodb npm package to initS3 and initMongo through the function parameters. Alternatively you can use these environment variables instead of using the function parameters: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_S3_ENDPOINT, AWS_REGION, MONGODB_URL, MONGODB_DB, MONGODB_USER and MONGODB_PASSWORD.
- You can change the name of the MongoDB collection h5p to any name you want. If the collection doesn't exist yet, it will be automatically created.
- You can change the name of the S3 bucket h5pcontentbucket to any name you want, but you must specify one. You must create the bucket manually before you can use it.
- The configuration object received by initS3 is passed on to the aws-sdk, so you can set any custom configuration values you want.
- If your S3 system cannot handle object keys that are 1024 characters long, you can set the option maxKeyLength to the value you need. It defaults to 1024.

The example Express application can be configured to use the MongoDB/S3 content storage by setting the environment variables from above and these additional variables: CONTENTSTORAGE=mongos3, CONTENT_AWS_S3_BUCKET (the name of the content bucket) and CONTENT_MONGO_COLLECTION (the name of the content collection).
An example call would be:
CONTENTSTORAGE=mongos3 AWS_ACCESS_KEY_ID=minioaccesskey AWS_SECRET_ACCESS_KEY=miniosecret AWS_S3_ENDPOINT="http://127.0.0.1:9000" MONGODB_URL="mongodb://127.0.0.1:27017" MONGODB_DB=testdb1 MONGODB_USER=root MONGODB_PASSWORD=h5pnodejs CONTENT_AWS_S3_BUCKET=testbucket1 CONTENT_MONGO_COLLECTION=h5p npm start
By default the storage implementation allows all users read and write access to all data! It is very likely that this is not something you want! You can add a function to the options object of the constructor of MongoS3ContentStorage to customize access restrictions:
getPermissions?: (contentId: ContentId, user: IUser) => Promise<Permission[]>;
The function receives the contentId of the object that is being accessed and the user who is trying to access it. It must return a list of permissions the user has on this object. Your implementation of this function will probably be an adapter that hooks into your rights and permission system.
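A minimal sketch of such an adapter is shown below. The canUserEdit function is a hypothetical placeholder for your own permission check, and the Permission enum members used here (View, Edit, Delete) should be checked against the Permission enum exported by your version of @lumieducation/h5p-server:

import {
    MongoS3ContentStorage,
    initS3,
    initMongo
} from '@lumieducation/h5p-mongos3';
import { ContentId, IUser, Permission } from '@lumieducation/h5p-server';

// Hypothetical hook into your own rights and permission system.
declare function canUserEdit(
    userId: string,
    contentId: ContentId
): Promise<boolean>;

const storage = new MongoS3ContentStorage(
    initS3({ forcePathStyle: true }), // connection details from env. variables
    (await initMongo()).collection('h5p'),
    {
        s3Bucket: 'h5pcontentbucket',
        getPermissions: async (
            contentId: ContentId,
            user: IUser
        ): Promise<Permission[]> => {
            // Everyone may view content; only users your rights system marks
            // as editors may also change or delete it.
            if (await canUserEdit(user.id, contentId)) {
                return [Permission.View, Permission.Edit, Permission.Delete];
            }
            return [Permission.View];
        }
    }
);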
In the default setup, all resources used by H5P content in the player (images, video, ...) are requested from the H5P server, which in turn requests them from S3 and relays the results. This means that in a high-load scenario there will be a lot of load on the H5P server just to serve these static files. You can improve scalability by setting up the player to load content resources directly from the S3 bucket. For this, you must grant read access on the bucket to anonymous users. If you have content that must not be accessible to the public (e.g. for copyright reasons), this is probably not an option.
This currently only works for the player, not for the editor. Because of this, you must still serve the 'get content file' route to make sure the editor can work with resources correctly.
Steps:
1. Grant read-only permission to anonymous users for your bucket with bucket policies (an illustrative policy call is shown after the URL examples below). See the AWS documentation for details.
2. Set the configuration option contentFilesUrlPlayerOverride to point to your S3 bucket. The URL must also include the contentId of the object; for this, you must add the placeholder {{contentId}} to the configuration value.
Examples:
contentFilesUrlPlayerOverride = 'https://bucket.s3server.com/{{contentId}}';
// or
contentFilesUrlPlayerOverride = 'https://s3server.com/bucket/{{contentId}}';
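For step 1, the exact mechanism depends on your S3 provider. As an illustration only, the following AWS CLI call attaches a bucket policy that allows anonymous read access to all objects in the content bucket; the bucket name, credentials and endpoint are the example values from the initialization snippet above, so adapt them to your setup and consult the AWS or MinIO documentation for the policy format that applies to you:

# Illustrative only: allow anonymous s3:GetObject on everything in the bucket.
AWS_ACCESS_KEY_ID=s3accesskey AWS_SECRET_ACCESS_KEY=s3accesssecret \
    aws --endpoint-url http://127.0.0.1:9000 s3api put-bucket-policy \
    --bucket h5pcontentbucket \
    --policy '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":"*","Action":["s3:GetObject"],"Resource":["arn:aws:s3:::h5pcontentbucket/*"]}]}'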
There are automated tests in /test/implementation/db/MongoS3ContentStorage.test.ts. However, these tests are not run automatically when you call npm run test or other test commands, because they require a running MongoDB and S3 instance and therefore need more extensive setup. To execute the tests manually, call npm run test:h5p-mongos3.
To quickly get a functioning MongoDB and S3 instance, you can use the Docker Compose file in the scripts directory like this (you obviously must install Docker and Docker Compose first):
docker-compose -f scripts/mongo-s3-docker-compose.yml up -d
This will start a MongoDB server and MinIO instance in containers. Note that the instances will now be started when your system boots. To stop them from doing this and completely wipe all files from your system, execute:
docker-compose -f scripts/mongo-s3-docker-compose.yml down -v
The MinIO instance will not include a bucket by default. You can create one with the GUI tool "S3 Browser", for example, or with the AWS CLI.
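For example, with the AWS CLI the bucket could be created like this; the credentials and bucket name are the MinIO values from the example call above, so adjust them to your setup:

# Create the content bucket on the local MinIO instance started by Docker Compose.
AWS_ACCESS_KEY_ID=minioaccesskey AWS_SECRET_ACCESS_KEY=miniosecret \
    aws --endpoint-url http://127.0.0.1:9000 s3 mb s3://testbucket1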