With MESG, you can build applications that easily connect any technologies together, including blockchains and web APIs. Imagine the possibilities when any technology can exchange data with any other over a decentralized network.
There has never been a solution quite like this before.

Understanding MESG

Since the early days of computing, we’ve never had a settled standard for how services should communicate with each other over a network.

This historical lack of standardization now limits today’s microservice methodology. Microservices brought great value to software ecosystems by showing that services should be kept small and dedicated to specific functionalities. But the methodology still falls short on reusability, because there is still no standard for how services should communicate with each other over a network.

So far, we haven’t been able to create reusable services that work without modifications to their network layers. A service must be refactored to be compatible with whatever communication protocol is adopted by the product that uses it.

Assume you already have some working services that you created for previous projects. Some of these services expose their functionality to the network via HTTP endpoints, while others use gRPC, JSON-RPC, GraphQL or even a custom communication protocol over TCP.

To use these services together in a new product, you must either create an environment that supports their varying network communication protocols, or refactor them to enforce a single standard and build a messaging protocol on top of it.

Meanwhile, you have to figure out how to solve service discovery, load balancing and security problems just to run and scale your services seamlessly.

Instead of focusing on the features that actually matter for your product, you end up dealing with unrelated concerns, and that’s a big waste of development hours!

Quoted from Morpheus

So, why not define a standardized communication protocol that opens up the opportunity to create truly reusable services?

MESG solves this problem by defining a standard for services and introducing processes that describe how data should flow between them.

Alongside a well-defined network communication protocol for services, MESG lets your services run in a decentralized way and natively manages networking, service discovery, load balancing and security with the power of container technology.

MESG’s Core is responsible for running services and processes in a decentralized way. It runs as a daemon and can be deployed to any peer in the network. Even Core itself consists of a set of system services, which opens the possibility of decentralizing Core itself across the network.

Reusable services aren’t a myth anymore, thanks to MESG! You can check out the existing ones on the service marketplace, or even create your own for others to use and earn some MESG Tokens along the way! Shh… We’ll talk about blockchains later…

An amazed Doge

If we had had this standardization and service marketplace from the beginning of the internet, we wouldn’t have needed to create services from scratch. Reusing services that easily surely could’ve moved humanity decades forward by preventing all those development hours from being wasted!

Check out mesg.com and the online documentation to get better acquainted with MESG!

Services

Services are small, focused programs that implement specific functionalities. To expose these functionalities to the network for others to use, their capabilities need to be defined in a standard format.

MESG introduces mesg.yml to meet exactly this need. In this configuration file, you can define the schema of tasks that your service is capable of executing and the types of events that your service may emit. You can also specify dependencies, data volumes, scalability properties, network configurations and other primitives that describe how your service should run.
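To give a rough idea, here is a minimal sketch of a mesg.yml for a hypothetical counter service. The service, task and event names are invented for illustration; refer to the documentation for the exact schema:

# mesg.yml (sketch, hypothetical service)
name: counter
description: Counts on demand and broadcasts ticks.

tasks:
  increment:          # a task the service can execute
    inputs:
      by:
        type: Number
    outputs:
      success:
        data:
          total:
            type: Number

events:
  tick:               # an event the service may emit
    data:
      total:
        type: Number
      timestamp:
        type: Number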

MESG is event-oriented at its roots. Services’ tasks and events are designed with asynchrony in mind first, which fits well with the asynchronous world of services. This lets you easily create reactive applications that connect any services by using processes.

Think of tasks as a superset of RPCs, except that MESG is event-oriented: a task’s outputs are emitted asynchronously once its execution is finished.

Events are a new concept that MESG introduces to the service world. They’re very handy for broadcasting bits of data that may be useful to other services in the network, depending on your application logic. Events are emitted from inside services as part of their business logic and handled by processes. Task executions, on the other hand, are performed on services by processes, and their outputs are again handled by processes.
As you can see, services can only be connected through processes; they’re never aware of each other.

Let’s examine the webhook and Discord services to get some ideas:

In the webhook service’s mesg.yml file, we have the request event definition with data and headers as its payload.
The request event is emitted on every POST call to the /webhook endpoint, with the headers and body of each HTTP request used as its data. You can check the corresponding HTTP handler to see it in code.
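The corresponding event definition in mesg.yml looks roughly like this (a sketch based on the description above; the actual file in the service’s repository may differ in detail):

events:
  request:
    data:
      data:
        type: Object    # the parsed body of the HTTP request
      headers:
        type: Object    # the HTTP request headers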

In the discord service’s mesg.yml file, we have the send task, which takes an email and a sendgridAPIKey as its input data. It has two different outputs: success and error. If sending the invitation succeeds, the task emits the success output with code and message data; otherwise it emits the error output with an error message. You can see how simple its implementation is in code.
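Its task definition in mesg.yml looks roughly like this (again a sketch based on the description above; the exact types may differ in the actual service):

tasks:
  send:
    inputs:
      email:
        type: String    # invitee's email address
      sendgridAPIKey:
        type: String    # key used to send the invitation email via SendGrid
    outputs:
      success:          # emitted when the invitation is sent
        data:
          code:
            type: Number
          message:
            type: String
      error:            # emitted when sending fails
        data:
          message:
            type: String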

Check out the discord-invites process shared in the Processes section below to see these services in action.

Processes

One of MESG’s visions is to let people, even non-programmers, create applications without writing a single line of code. Processes make this possible by connecting pre-built services from the marketplace. It will also be possible to create processes easily via a user interface where you connect the data dots from various services to describe the flow between them.

Processes can be thought of as configurations that describe how data should flow between services.

Basically, processes are used to define conditions for executing tasks. These conditions can depend on different task outputs and events from various services. Services are, metaphorically, connected to each other through processes.

You can define patterns that match task outputs and events from the various services your application needs, manipulate their payloads and metadata, and execute a chain of tasks on multiple services in the network by using this data as task inputs.

Processes support advanced data manipulation, filtering and parallel/serial task executions.

Processes are like the circuit lines that transfer data between circuit components, but for services

Let’s see what a basic process looks like, using the discord-invites process:

name: discord-invites

description: |
  Send discord invites to your fellows.

  curl -d "email=your@email.com" -XPOST http://localhost:3000/webhook

services:
  webhook: 4f7891f77a6333787075e95b6d3d73ad50b5d1e9
  discord: 1daf16ca98322024824f307a9e11c88e0aba55e2

configs:
  sendgridAPIKey: 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

when:
  webhook:
    event:
      request:
        execute:
          discordExecution:
            map:
              email: $event.data.data.email
              sendgridAPIKey: $configs.sendgridAPIKey
            discord: send

This process sends Discord invites to people by email. It uses the webhook and discord services; you can read about them in the Services section above to see exactly how they work.

Let’s look at the when section of the process. Every declaration about listening for results and events and executing tasks is made under when. Besides that, there are other sections in the process definition format that declare a name for the process (name), its description (description), a map of service names and their IDs (services) and some constants (configs).

For this discord-invites process, there is a definition that listens for request events on the webhook service. When this event is received, we access its data with the special $event.data variable and the pre-defined configs with the $configs variable to specify task inputs in the map section. Each field in map corresponds to a field of the task’s inputs. In this case, we’re mapping data to set the email address and the SendGrid API key as input data for the send task on the discord service.

In short, whenever a request event is received, the send task is executed with the email address dynamically taken from the event’s data and the statically defined SendGrid API key config. This flow repeats on every request event until the process is deleted from Core.

You can also execute multiple tasks in series or in parallel by adding more named executions under the execute section. It’s even possible to make executions depend on each other by using the special dependsOn (coming very soon) field on each execution. Accessing the props of the root event/result and inheriting task outputs from parent executions are also supported, as sketched below.
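As an illustration, a when section with a dependent execution could look roughly like this. The logger service, its log task and the output-access syntax are assumptions made for this sketch, and dependsOn is the upcoming feature mentioned above, so its final syntax may differ:

when:
  webhook:
    event:
      request:
        execute:
          discordExecution:
            map:
              email: $event.data.data.email
              sendgridAPIKey: $configs.sendgridAPIKey
            discord: send
          logExecution:
            dependsOn: discordExecution   # runs only after discordExecution finishes
            map:
              # hypothetical syntax for inheriting a parent execution's output
              message: $discordExecution.outputs.data.message
            logger: log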

Check out the Quick Start guide to run the MESG SDK on your machine, deploy and start the webhook and Discord services, and run the discord-invites process in just seconds!

Please be aware that some features mentioned in this blog post may not be fully implemented yet. Stay tuned and follow the latest developments via the forum.