NuNet architecture

and service discovery principles

April 1, 2021

NuNet deploys SNet services on publicly sourced hardware

We start with:

  • Independent collection of diverse machines / execution environments;
  • Independent collection of SNet AI services / executable containers;

A collection of separate machines

  • Potentially anybody can onboard their hardware onto the platform;
  • Different owners / maintainers (community members, physical or juridical persons);
  • Many types of machines / execution environments (PCs, servers, cloud VMs, mobile phones, SBC, XBoxes, robots, etc.);
  • Different capacities (speed, memory, storage);
    • incl. relation to the meatspace (sensors, actuators, etc.);
  • Different owner-defined usage preferences (e.g. nothing to do with cats and dogs!)
  • Connected into a loosely defined decentralized network with a non-uniform uncertain topology;
  • Machine-readable and accessible to the network (with the ability to define permissions);
    • i.e. metadata descriptions of execution environments are publicly (?) available;
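A machine metadata description might look like the following JSON sketch; every field name here is an illustrative assumption, not a fixed NuNet schema:

```json
{
  "machine_id": "pc-042",
  "owner": "community-member-id",
  "machine_type": "PC",
  "capacity": { "cpu_ghz": 2.4, "cpu_cores": 4, "ram_mb": 8192, "storage_mb": 512000 },
  "peripherals": ["camera", "gps"],
  "usage_preferences": { "denied_topics": ["cats", "dogs"] },
  "permissions": { "metadata_public": true }
}
```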

A collection of separate SNet AI services

  • ‘Serverless’ docker containers registered on the platform;
  • Owned by community members (physical or juridical persons);
  • Distinct functionality described as:
    • well defined and expressed inputs and outputs;
    • somewhat explicitly (?) defined functionality;
    • explicitly defined resource requirements;
    • historic performance metrics / reputation scores;
    • machine-readable and accessible to the network (with the ability to define permissions);
  • Functionalities of SNet AI services could and should overlap – the more the better for marketplace competition;
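A service metadata record might be sketched in the same spirit (again, field names are guesses at a possible structure, not the registry's actual format):

```json
{
  "service_id": "binary-classification",
  "owner": "community-member-id",
  "inputs": [{ "name": "text", "type": "string" }],
  "outputs": [{ "name": "score", "type": "float" }],
  "functionality": "classifies input text into one of two classes",
  "resource_requirements": { "cpu_ghz": 1.0, "ram_mb": 2048, "storage_mb": 1024 },
  "reputation": { "calls": 1200, "success_rate": 0.98 }
}
```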

We want to get to:

  • SNet AI services needed for a specific application are deployed on different and isolated execution environments and send data to each other, seamlessly serving the needs of the application:
    • dynamically constructing an application workflow for each application (or potentially for each call);
    • constituting a hardware + software mesh from the architectural perspective;

Which eventually means:

  • Many-to-many relationship of machines to services, stitching together instances of the same services / containers for different applications;
  • Ability to scale up/down automatically as required for overall functionality / load balancing requirements of a single application, including:
    • load balancing of multiple user requests;
    • directing and distributing token flows from users to each AI service utilized in an app;
    • dynamically determining hardware prices based on hard and soft constraints;
    • using different devices / execution environments;
  • Dynamic matching of soft and hard constraints of individual machine and service declarations for ad-hoc workflow construction;
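The hard/soft constraint matching above can be sketched minimally as follows. The field names (`requires`, `denied_services`, `preferred_region`, `reputation`) are illustrative assumptions, not NuNet's actual declaration schema: hard constraints filter out ineligible machines, soft constraints rank the survivors.

```python
# Minimal sketch: match a service's declared requirements against machine
# metadata. Hard constraints are mandatory; soft constraints only rank.

def satisfies_hard(machine, service):
    """A machine is eligible only if it meets every hard requirement."""
    req = service["requires"]
    return (machine["capacity"]["cpu_ghz"] >= req["cpu_ghz"]
            and machine["capacity"]["ram_mb"] >= req["ram_mb"]
            and service["name"] not in machine.get("denied_services", []))

def soft_score(machine, service):
    """Higher is better; used to rank machines that pass the hard filter."""
    score = 0
    if machine.get("region") == service.get("preferred_region"):
        score += 1
    score += machine.get("reputation", 0)
    return score

def match(machines, service):
    """Pick the best eligible machine, or None if no machine qualifies."""
    eligible = [m for m in machines if satisfies_hard(m, service)]
    return max(eligible, key=lambda m: soft_score(m, service), default=None)

machines = [
    {"id": "pc-1", "capacity": {"cpu_ghz": 2.0, "ram_mb": 4096},
     "region": "eu", "reputation": 3},
    {"id": "sbc-1", "capacity": {"cpu_ghz": 1.2, "ram_mb": 1024},
     "region": "us", "reputation": 5,
     "denied_services": ["binary-classification"]},
]
service = {"name": "binary-classification",
           "requires": {"cpu_ghz": 1.0, "ram_mb": 2048},
           "preferred_region": "eu"}

print(match(machines, service)["id"])  # prints "pc-1"
```

The same shape extends naturally to owner-defined usage preferences (the `denied_services` list) and to price formation, where the soft score could incorporate a dynamically determined hardware price.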

We do use-case driven development

  • Developing platform in parallel with the fully functional showcase application:

    • Application side: fake-news-detector browser extension calling a collection
      of AI services running on the platform;
    • Platform side: implementing all components in an application-agnostic way;

Application side:

  • Chromium extension;
    • Activated by user when reading content;
    • Notifies the user of the probability that the content is fake news (currently articles);
  • Backend:
    • Consists of a few SNet AI containers stitched together;
    • Makes sure that the workflow works correctly;

Platform side:

  • Metadata:
    • workflow definition (on the orchestrator machine);
    • service metadata (in registry/service container);
    • machine metadata (on the specific machine);
  • Workflow aggregation code (nunet orchestrator) which constructs the actual working workflow from definitions:
    • searches for deployed and available SNet services;
    • initiates deployment of required SNet AI (if not available);
    • receives input from all parties;
    • aggregates the final output and sends to user/browser extension;
  • This is done by:
    • side-car ‘nunet-adapter’ container (potentially binary) deployed on each machine and taking care of the p2p connections between them;
    • allowing SNet service to communicate only via the adapter;
    • dynamically orchestrating deployment of all of them on available hardware (in serverless style) and connecting dynamic IPs and ports together so that they can communicate;
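The orchestration step above can be sketched as a small loop: for each service in a workflow, reuse a running instance if one exists, otherwise deploy it (container plus nunet-adapter side-car) on some machine and record the adapter's dynamic endpoint. Every name, field, and the port-assignment scheme here is a hypothetical stand-in, not the actual nunet orchestrator API:

```python
# Hypothetical orchestrator sketch: resolve each service in a workflow
# to a live (machine, port) adapter endpoint, deploying only when needed.

deployed = {}  # service name -> (machine, port) of its nunet-adapter

def find_running(name):
    """Search for an already-deployed, available instance of the service."""
    return deployed.get(name)

def deploy(name, machine):
    """Stand-in for starting the container and its side-car adapter."""
    endpoint = (machine, 7000 + len(deployed))  # dynamic port: an assumption
    deployed[name] = endpoint
    return endpoint

def construct_workflow(service_names, machines):
    """Resolve every service to an adapter endpoint, deploying as needed."""
    endpoints = {}
    for i, name in enumerate(service_names):
        endpoints[name] = find_running(name) or deploy(name, machines[i % len(machines)])
    return endpoints

eps = construct_workflow(["service-a", "service-b"], ["machine-a", "machine-b"])
print(eps)  # each service now has a (machine, port) pair to be reached through
```

A second call with the same services returns the same endpoints, which is the serverless-style reuse the many-to-many machine/service relationship relies on.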

Workflow construction sequence

Example definitions

Questions / comments / notes

  • workflow-definition_example.json seems to be the closest option to what is needed (?);
  • an ontology scheme could be developed for describing the structure of these documents (possibly integrating service metadata on the platform) – something in this form:
  • there was a talk about a data ontology for ai-dsl, and existing data ontologies in the domain of bio-AI were mentioned; how can we learn about these?
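As a placeholder for the structure hinted at above, a workflow-definition document might take a form like the following; every field name is a guess at a possible shape, not the contents of the actual workflow-definition_example.json:

```json
{
  "workflow": "fake-news-detector",
  "services": [
    { "name": "service-a", "inputs": ["article_text"], "outputs": ["score"] },
    { "name": "service-b", "inputs": ["score"], "outputs": ["verdict"] }
  ],
  "edges": [["service-a", "service-b"]]
}
```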