The road to decentralizing executions.

Enabling applications to run on decentralized networks is the last major step on the path to completing our vision of building a decentralized network of services.

With decentralized executions, apps can remain always available and ready to react, running entirely on decentralized cloud computing. This functionality, atop MESG’s existing tools, will allow users to merge the development and hosting of applications into one.

We’re enabling decentralized executions in two branches, both being built simultaneously. The first optimizes the interaction between services with Orchestrator, and the second decentralizes data throughout the Network as efficiently and securely as possible.

Branch 1: Orchestrator

In order to build decentralized, autonomous processes, data needs to be secured, simplified and easily accessible.

1. Simplifying the service output

First, services need to produce the same output regardless of whether they are deployed on your computer or from another source.

Because data and processing are spread throughout the network on various types of machines, outputs need to be as simple and versatile as possible. This is a foundational step required before we build further.

2. Chain of executions

With data dispersed throughout the decentralized network, mechanisms to secure data are required. Linked lists will be used to link executions together in order to build secure processes.

Linked lists are similar to blockchains in that they link a validated execution to the previous execution through cryptographic functions. With one execution cryptographically attached to the previous execution, secure executions can be created in a path to form and enable processes.

Each time you want to execute something, you create another list item and follow that item with all executions that you want to occur down a specific path. Depending on the variables set, multiple trees with various paths can be created.

3. Building an API on top of executions

Accessibility and maintainability are crucial to building efficient applications, so we need a proper API to allow for easy access to our linked list of executions.

Currently, event and task executions are two separate things. With the introduction of processes, and with the goal of decentralizing executions, we need an execution type that links events and task executions together, so that any node can access the same data, and so that this data is sufficient to process, reach consensus on, and verify the execution.

Executions should not be possible without a process. An execution will be triggered either by an event or by the result of a previous execution. We need to make sure we can trace any execution back to its origin: an event and a list of results.

4. Process implementation

Processes are at the core of MESG’s decentralized executions. Their functionality will be implemented in increasing degrees of complexity: at first, a tree graph of executions, with a possible move to a full DAG at a later time.

The tree graph implementation is possible thanks to the execution structure that references the previous execution. Each new execution creates execution data that points to the previous one (the result that allowed this execution to trigger).

Only a single event can start a process; once triggered, a process can contain multiple chains of results. This lets users create more complex processes and build new trees with additional branches.

Here’s an example:

5. Merging processes and services together

At this stage, processes will be merged with services, so users will be able to build apps that send tasks or events either from the service itself or from external services.

A service’s process can execute another service’s task, listen for events from other services, receive orders to execute tasks, and emit events for other services. When a service with external dependencies is started, all of its external dependencies will also start and connect to it.

6. Process data resolver

To create more complex processes, the process data resolver will be introduced, allowing users to choose which inputs they’d like to use or ignore.

Branch 2: Network

While Orchestrator is designed to powerfully simplify applications, the decentralized network must be built to ensure data is appropriately distributed with the ideal balance of security, efficiency, and availability.

1. Creating an Instance database

It’s crucial that services running in a decentralized environment do not reveal sensitive data, so we are splitting our service database into two databases: an instance database, and a service database.

The service database will only store the service definitions. Its primary index is the hash of the service definition, calculated by the Engine, called the Service Hash.

The Instance database will store the information about the actual running services (Docker service/container IDs, network IDs, and the service definition hash). Its primary index is the hash of the service definition plus the custom environment, calculated by the Engine, called the Instance Hash.

The custom environments are not stored in any database but are used to calculate the instance hash and are injected into the Docker service upon starting. This ensures that multiple nodes run the same service with the same configuration, without knowing the actual configuration (env variables). The MESG Engine doesn’t store env variables, as they can contain sensitive data (e.g. logins/passwords, private keys, API keys, etc.).

2. Decentralizing the databases

Data in the process, instance, and service databases will be synchronized and validated across the network using Cosmos SDK and its Tendermint BFT consensus mechanism.

Each modification of data will have to be validated by at least ⅔ of all validator nodes to be synchronized on the entire network. This ensures security, replication and availability of the data.

At this point, executions will be able to run in a trusted decentralized environment.

3. Trustless decentralized executions

Here, a validation step for executions will be added: the result of an execution from a single node will be validated against the results of many other nodes.

From this point on, executions can run in a fully trustless decentralized environment.

Some services’ tasks cannot be validated because of their nature (e.g. writing data to a private database, generating a random number, etc.), so the validation step of executions will be optional, to allow maximum flexibility.

4. Scaling the network

To remove the limitation of running everything on only one network, each instance will create a sub-network that only contains the necessary data to process the execution of this specific instance.

This will significantly increase the scalability of the root network, reducing the data required for nodes to execute and validate executions, while remaining secure through Cosmos SDK and Tendermint.

UPDATE (14.10.19):

We have been making quick progress, having already completed:
