
Publication date: June 8, 2025
Over the past five months, as we worked through the detailed technical elaboration, the implementation path has become much clearer and more specific. We were also accepted into the startup program of INNCubator, the innovation hub of the Tyrol Chamber of Commerce and the University of Innsbruck, where we receive support to identify potential challenges and to drive the project's progress forward - both on the technical side and in other aspects.
In this article, I want to share the technical progress we've made in recent months and give an update on the evolved technical implementation of the Logos Project.
Since we've noticed through conversations with both technical and non-technical people that there's often some initial confusion about what exactly we're building, I want to take a moment to clearly summarize the core ideas and technical concepts behind the project.
Recap
At its core, the Logos Project is an initiative to develop a new model for providing, operating and managing physical server infrastructure - one that is transparent, verifiable, and trustless, in contrast to today's hyperscale centralized data centers and Infrastructure-as-a-Service providers.
Based on this model, our goal is to build the Logos Network: a decentralized infrastructure operated by the community, governed by protocol-driven logic, and coordinated democratically through a Decentralized Autonomous Organization (DAO).
In short, we aim to physically decentralize today's data center hardware and, through the Logos Relay Chain and the implementation of trustless protocols, create a secure and autonomous foundation for that infrastructure.
Introduction
Let's begin with one of the key points: our early assumption was that a large portion of the infrastructure could be built using personal devices - such as home PCs or privately owned server hardware. We explained the associated network challenges in the blog post Logos Edge Hubs: Key to Effective Implementation, where we outlined how this might work in theory. However, in practical terms, such an approach would require near-perfect conditions, which are rarely found in real-world environments.
Personal gaming PCs and private servers will still remain part of the infrastructure. These kinds of resources can be useful in specific use cases - particularly when network and security demands are moderate. For instance, they can be used for test environments, small-scale web applications, or AI model training, as long as the workloads don't require large hardware capacities or involve highly sensitive data.
In such decentralized environments, a very good level of security can still be maintained - even with home-based setups - but only up to a point. If the data being processed is valuable enough to justify the effort of stealing it, a determined attacker might eventually succeed. But the same is true for traditional data centers - if someone is truly determined, they will eventually find a way.
It's important to note, though, that under the current security mechanisms, the effort required to carry out such an attack is quite high. Still, if it were known that highly sensitive data was being processed - which, ideally, should never be the case, since this kind of resource provider shouldn't know what's running on the hardware - someone might consider that effort worthwhile.
For that reason, we don't consider this part of the infrastructure suitable for handling highly critical or sensitive workloads. Still, in many real-world use cases this setup is a practical, secure and acceptable option.
But since our aim is to build an enterprise-grade infrastructure with strong security guarantees, we are developing the Logos Edge Hub. This "mini-rack" is designed so that any attempt to tamper with the hardware is immediately detectable, and access to data can be blocked as quickly as possible.
Before going into the individual challenges and our refined proposed solutions, I also want to briefly mention the Logos Relay Chain at this point, as it will serve as the control and security component of the infrastructure.
Originally, it was planned as a Substrate-based L1 solo chain, but as we realized that many transactions would need to cover various parts of the ecosystem (including the DAO, compensation payouts, automated provisioning and management of the infrastructure itself, and a trustless machine integrity attestation protocol), we shifted toward a relay chain approach with system parachains.
In this architecture, the Logos Relay Chain will function similarly to the current Polkadot Relay Chain, but with some key differences. First, it's dedicated exclusively to the Logos Network and not intended as a general-purpose chain. Second, we will use specialized off-chain daemons that must run alongside the validators and can be slashed if they perform their tasks incorrectly. Moreover, the traditional roles of validators and collators will be effectively merged: instead of assigning fixed roles and responsibilities, tasks will rotate randomly - meaning that a validator who produces a block on the relay chain in one round might validate a parachain block in another.
The idea behind this design is rooted in efficiency and security. Since the Logos Relay Chain is relatively small, this dynamic role-switching helps strengthen the overall security of the network. If we had to assign dedicated collators for each parachain, we would risk spreading our resources too thin, which could weaken the system as a whole.
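To make the rotation idea more tangible, here is a minimal sketch of how a per-round role assignment could be derived deterministically from a shared seed. The function and the mixing scheme are purely illustrative assumptions - the actual protocol would rely on the chain's randomness (for example a VRF), not on this simple hash mix.

```rust
// Minimal sketch of per-round random role assignment, assuming a shared
// round seed (e.g. derived from on-chain randomness). Names are illustrative.

#[derive(Debug)]
enum Role {
    RelayBlockProducer,
    ParachainValidator(u32), // index of the parachain to validate
}

/// Deterministically derive a node's role for a given round from a seed.
/// Every honest node computes the same assignment from the same inputs;
/// a real protocol would use a VRF or randomness beacon instead of this mix.
fn role_for_round(node_index: u64, round_seed: u64, parachain_count: u32) -> Role {
    assert!(parachain_count > 0);
    let mixed = round_seed
        .wrapping_mul(0x9E37_79B9_7F4A_7C15)
        .wrapping_add(node_index);
    if mixed % (u64::from(parachain_count) + 1) == 0 {
        Role::RelayBlockProducer
    } else {
        Role::ParachainValidator((mixed % u64::from(parachain_count)) as u32)
    }
}
```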
So, we essentially have the hardware component (personal PCs, home servers, and Logos Edge Hubs) for the infrastructure, as well as the relay chain for security and governance. The next step is to efficiently connect the hardware, ensure the security of the infrastructure, and define the system in a way that allows it to be deployed and managed as Infrastructure as Code (IaC).
This leads us to the three main challenges we face in realizing the vision.
The Main Challenges
The first is network connectivity. Traditional data centers are located at major internet exchange points, giving them access to high-throughput, low-latency connections. Achieving comparable performance in a decentralized setting is far from trivial.
The second major challenge is hardware security. In data centers, physical access is controlled - servers are housed in secured rooms or halls, and only authorized personnel can interact with them. We need alternative mechanisms to ensure the integrity and trustworthiness of hardware that is not locked away in such controlled environments.
The third challenge is automation and provisioning. To truly decentralize infrastructure, we need to automate the entire deployment process in a way that's trustless and transparent - using smart contracts to define and control infrastructure in the spirit of Infrastructure as Code (IaC).
Over the past year of intense research and exploration, our implementation strategies and technical approaches have evolved and expanded several times. In this article, I want to share the current state of the concept and the potential technical execution.
It's also important to note that I won't go into too much depth here, as we're currently working on the Logos Paper. This paper aims to present the full scope and potential of the project in a clear and structured way. It will serve as the foundation for further development by consolidating all relevant aspects - technical feasibility, the economic model, and the regulatory approach - into a single, comprehensive document.
The Logos Paper is expected to be published in the coming months. This foundation is essential for involving more contributors and enabling productive collaboration as the project progresses.
Network Connectivity
Let's start with what we consider the most challenging aspect: the quality of the infrastructure's network connectivity.
Here, it's important to distinguish between small-scale providers (such as personal PCs or home servers) and those providing Logos Edge Hubs, since the network requirements for these two categories are very different. Small providers need only a symmetric 1 Gbit/s connection, while Logos Edge Hubs are required to provide a symmetric 10 Gbit/s connection plus an additional dedicated 1 Gbit/s line.
In itself, this isn't a fundamental obstacle, as fiber-to-the-home (FTTH) coverage is now widespread and continuing to grow rapidly. Outside of Central Europe and a few high-cost regions such as the United Arab Emirates, pricing is also economically feasible.
It's important to keep in mind that high throughput in traditional data centers is necessary because a lot of activity is concentrated in a single place and accessed centrally.
In contrast, Logos Edge Hubs are currently planned to be equipped with enough resources to make efficient use of a 10 Gbit connection, with routing logic that allows traffic to be distributed to other Edge Hubs in case of local congestion.
However, the biggest challenge is achieving acceptable latency. In a data center, data is routed to a major internet exchange point and processed very quickly. Our Edge Hubs and smaller nodes, however, are not located at such central points. That means we will have to solve this with routing mechanisms, a virtual private low-latency network, and a reverse proxy infrastructure.
The bottom line is that we will not reach the same latency values as centralized data centers. Even with routing nodes positioned at key exchange points, there will still be some added latency. However, it's important to note that this increase is measured in milliseconds - which is still efficient enough for most use cases, unless extremely low latency is required.
I won't go further into this here, as the network solution is still being developed. We're currently addressing it as part of the Logos Paper in order to define the most efficient approach.
Security
When it comes to security and privacy, we see essentially the same problems in today's data centers. The core issue is the integrity of the servers and hardware on which data is processed in the first place. For example, in most cases, the user whose data is being processed would have no way of knowing if their data was accessed or stolen - whether due to a software vulnerability or physical access to the hardware.
Currently, trust in infrastructure is mostly built on contracts and implicit trust: that no one will look at your data, and that if something happens, the user will be notified. While reporting such incidents is legally required, in reality this is often circumvented through strict NDAs that prohibit employees from speaking out. History has shown that many data breaches go unreported until they become too obvious to hide - and at that point, companies often receive only a "reasonable" penalty that resembles a parking ticket more than a real consequence.
We are currently developing the MIA (Machine Integrity Attestation) protocol, which, in combination with the Logos Edge Hub, aims to provide a secure and verifiable foundation for operating decentralized infrastructures. The protocol is Web3-native and incorporates several layers of technology and approaches. Here's roughly how it will work:
When an Edge Hub boots up, the TPM (Trusted Platform Module) measures the boot chain (BIOS, bootloader, kernel) and updates the PCRs (Platform Configuration Registers). The kernel then activates IMA (Integrity Measurement Architecture) to hash executed binaries and configurations, while AIDE (Advanced Intrusion Detection Environment) checks file integrity against a signed reference database (maintained on a system parachain). After that, LKRG (Linux Kernel Runtime Guard) is loaded to monitor the kernel in real time for hooking attempts, inline patching, and rootkit attacks. In parallel, eBPF-based (extended Berkeley Packet Filter) sensors analyze system calls, process activity, network behavior, and memory access for suspicious activity. Physical access to the hardware is detected through dedicated sensors.
All these measurements - the aggregated PCR values, a combined IMA digest, the AIDE snapshot hash, the LKRG runtime status, summarized eBPF activity data, and the physical tamper flag (which should normally always be 0) - are merged into a single, unique integrity hash, which is stored on one of the system parachains as the current integrity state of the respective machine. This hash can then be verified via remote attestation through a trustless verification mechanism. Only if the measured hash matches the registered reference hash is the system considered trustworthy and allowed to run in a production environment.
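To make the aggregation step more concrete, here is a minimal sketch in Rust of how the individual measurements could be folded into a single integrity hash and compared against the on-chain reference. The struct layout and the use of SHA-256 are assumptions for illustration, not the final MIA format.

```rust
use sha2::{Digest, Sha256};

/// Illustrative snapshot of the measurements described above.
/// Field names and sizes are assumptions, not the final MIA format.
struct IntegritySnapshot {
    pcr_aggregate: [u8; 32], // aggregated TPM PCR values
    ima_digest: [u8; 32],    // combined IMA measurement digest
    aide_snapshot: [u8; 32], // hash of the AIDE reference snapshot
    lkrg_status: u8,         // 0 = clean, non-zero = runtime anomaly
    ebpf_summary: [u8; 32],  // summarized eBPF activity data
    tamper_flag: u8,         // physical tamper flag, normally 0
}

impl IntegritySnapshot {
    /// Fold all measurements into one hash that could be stored on-chain
    /// as the machine's current integrity state.
    fn integrity_hash(&self) -> [u8; 32] {
        let mut hasher = Sha256::new();
        hasher.update(self.pcr_aggregate);
        hasher.update(self.ima_digest);
        hasher.update(self.aide_snapshot);
        hasher.update([self.lkrg_status]);
        hasher.update(self.ebpf_summary);
        hasher.update([self.tamper_flag]);
        hasher.finalize().into()
    }
}

/// Remote attestation then reduces to comparing the measured hash
/// against the reference hash registered on the system parachain.
fn is_trustworthy(measured: [u8; 32], reference: [u8; 32]) -> bool {
    measured == reference
}
```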
To handle legitimate changes - such as software updates, configuration changes, or deployments - the MIA protocol includes a verified reference update process. After each authorized change, the entire system is re-measured, including all PCR values, IMA hashes, AIDE checks, and eBPF logs. A new integrity hash is generated and only accepted if the change was made through an authorized, signed deployment process.
This ensures that the system's reference state only changes in a traceable, secure, and verifiable way - without triggering false positives or compromising the reliability of the attestation process.
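As a rough illustration of this update path, the sketch below only accepts a newly measured integrity hash as the reference state when it carries a valid signature from an authorized deployment key. The choice of Ed25519 and the function names are assumptions for this example, not the final protocol.

```rust
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

/// Accept a new reference integrity hash only if the change was made
/// through an authorized, signed deployment process (illustrative only).
fn accept_reference_update(
    current_reference: &mut [u8; 32],
    new_hash: [u8; 32],
    signature: &Signature,
    deployment_key: &VerifyingKey,
) -> Result<(), ed25519_dalek::SignatureError> {
    // The deployment pipeline signs the re-measured integrity hash.
    deployment_key.verify(&new_hash, signature)?;
    // Only after successful verification does the reference state change.
    *current_reference = new_hash;
    Ok(())
}
```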
This topic will be covered in full detail in the upcoming Logos Paper. Please note that the protocol is still in development, and certain elements may change as work progresses.
Automation and Trustless Provisioning
In recent years, the introduction of concepts like Infrastructure as Code (IaC), Continuous Integration (CI), and Continuous Delivery (CD) has made many provisioning processes in infrastructure management more automated and efficient.
The challenge in our context is how to handle these processes in a protocol-based and trustless manner. Our approach is to perform provisioning in such a way that no party has control or influence over the deployment process. For this, a dedicated system parachain is used.
It's important to understand that blockchain networks are deterministic state machines. This means the blockchain only maintains the desired and actual states of the infrastructure - it cannot make external calls or wait for responses from outside systems.
To illustrate this, let's take Terraform (a popular IaC tool) as a reference example. Suppose a user wants to rent an environment - either bare-metal or virtual - from the infrastructure. Since this is an IaaS, we assume predefined machine or VM types to make resource allocation more manageable and efficient.
The user selects a product, and a transaction is initiated that includes all required configuration parameters. On the dedicated system parachain, a chain of smart contracts is used to validate the configuration request. If it meets all requirements, a new state is created on-chain representing the desired infrastructure state - similar to what Terraform would normally execute.
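For illustration, the sketch below shows this pattern of validating a configuration request against a predefined product and, only on success, recording a new desired state. The machine types, limits, and field names are hypothetical and not part of the actual parachain logic.

```rust
/// Hypothetical predefined machine types offered by the IaaS layer.
#[derive(Clone, Copy)]
enum MachineType {
    BareMetalSmall,
    VmStandard,
    VmLarge,
}

/// Configuration parameters submitted with the user's transaction.
struct ConfigRequest {
    machine_type: MachineType,
    vcpus: u16,
    memory_gb: u16,
    storage_gb: u32,
}

/// Desired infrastructure state as it would be recorded on-chain.
struct DesiredState {
    owner: [u8; 32], // account that rented the environment
    config: ConfigRequest,
}

/// Validate the request against the limits of the selected product.
/// Only a valid request results in a new on-chain desired state.
fn validate_and_record(owner: [u8; 32], config: ConfigRequest) -> Option<DesiredState> {
    let within_limits = match config.machine_type {
        MachineType::BareMetalSmall => config.vcpus <= 16 && config.memory_gb <= 64,
        MachineType::VmStandard => config.vcpus <= 8 && config.memory_gb <= 32,
        MachineType::VmLarge => config.vcpus <= 32 && config.memory_gb <= 128,
    };
    if within_limits && config.storage_gb > 0 {
        Some(DesiredState { owner, config })
    } else {
        None
    }
}
```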
Because we're aiming for automated provisioning, we must ensure that the state defined on-chain is actually realized in practice. To do this, we use off-chain daemons that run on each validator. Their task is to provision infrastructure according to the defined state - similar to how Terraform operates, but strictly limited to executing only what is specified on the blockchain.
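Conceptually, each daemon runs a reconciliation loop like the sketch below: it reads the desired state from the chain, compares it with what is actually provisioned, applies only the difference, and reports the observed state back. The traits and names are placeholders, not an existing API.

```rust
/// Placeholder interfaces; the real daemon would talk to the parachain
/// and to the local provisioning backend.
trait ChainClient {
    fn desired_state(&self) -> Vec<Deployment>;
    fn report_actual_state(&self, actual: &[Deployment]);
}

trait Provisioner {
    fn actual_state(&self) -> Vec<Deployment>;
    fn apply(&mut self, deployment: &Deployment);
}

#[derive(Clone, PartialEq)]
struct Deployment {
    id: u64,
    spec_hash: [u8; 32], // hash of the on-chain configuration
}

/// One reconciliation pass: provision exactly what the chain specifies,
/// nothing more, then report the observed state back for verification.
fn reconcile(chain: &impl ChainClient, provisioner: &mut impl Provisioner) {
    let desired = chain.desired_state();
    let actual = provisioner.actual_state();

    for deployment in &desired {
        if !actual.contains(deployment) {
            provisioner.apply(deployment);
        }
    }

    chain.report_actual_state(&provisioner.actual_state());
}
```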
These components are crucial, because if they don't perform their work correctly, it creates a potential security vulnerability. To mitigate this, we use a staking mechanism: the daemon is economically incentivized to act correctly. This is similar to the validator slashing principle - but here it applies to an off-chain worker, which must also verify whether the expected state has actually been reached.
Since this verification is non-deterministic, the daemons must cross-check each other. These checks happen off-chain, but their results are recorded and verified on-chain.
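One way such cross-checks could be aggregated is a simple quorum rule: the observed state is only recorded on-chain if enough daemons independently report the same result. The report format and threshold below are assumptions for illustration.

```rust
/// A single daemon's off-chain verification result for one deployment.
struct VerificationReport {
    daemon_id: u64,
    deployment_id: u64,
    state_hash: [u8; 32], // hash of the state the daemon observed
}

/// Accept the observed state only if at least `threshold` daemons
/// report the same state hash; the outcome is then recorded on-chain.
fn quorum_reached(reports: &[VerificationReport], expected: [u8; 32], threshold: usize) -> bool {
    reports
        .iter()
        .filter(|r| r.state_hash == expected)
        .count()
        >= threshold
}
```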
This is the basic principle, though the exact implementation may vary depending on the specific use case. It is also not yet fully decided whether the daemons will run directly on the validator nodes - this would allow us to leverage the existing validator stake, making them directly accountable - or whether they should be implemented as separate entities, especially considering performance and scaling concerns.
We're actively working on these questions, involving external experts to define and validate the most robust and efficient implementation path.
Summary and Outlook
To wrap things up, it's important to highlight how the various components we've discussed come together - and what lies ahead for the project.
At the heart of the system is a clear separation between two key components: the Logos Chain (Relay Chain + Parachains), which governs and secures the infrastructure, and the DVCI (Distributed Virtual Computing Infrastructure), which represents the actual computing and storage layer. The infrastructure is managed and verified through the chain, but is not part of the blockchain network itself.
At the core of the technical concept is enabling the Logos Edge Hubs to operate securely and verifiably through a combination of trustless mechanisms and protocols - following the principle: "Don't trust. Verify."
This means an infrastructure secured through protocol logic, always verifiable and auditable. Trustlessness is a prerequisite for meaningful verifiability. Without it, any claim of verification ultimately relies on trust - and on a centralized verifier.
We replace trust with verifiability, and human discretion with protocol logic.
Currently, we are pursuing a strongly decentralized approach, in which physical infrastructure is not concentrated in one location - unlike hyperscale data centers, where tens or hundreds of thousands of servers are placed in a single facility.
The main factor that still influences us here is network performance - especially how low latency and sufficient throughput can be achieved in practice. To improve infrastructure performance in use cases where very low latency is required, we are considering an additional approach: placing smaller data center-like environments (so-called mini data centers) near major network exchange points. These locations could host multiple Logos Edge Hubs and would likely need to be operated by small companies, due to the scale of investment required.
The important difference, however, is that even in those setups, operators would never have unnoticed access to data or infrastructure - any unauthorized access would be immediately detectable. And due to the decentralized nature of the network, protocol rules would ensure that only a limited number of Edge Hubs may operate in a single location, preventing centralization.
From a technical perspective, the location of an Edge Hub doesn't matter: whether it's operated at home or in a larger facility, its security and verifiability are enforced by the Logos Chain. No one - not even the operators - can access data without being immediately detected and slashed.
From a social perspective, we believe it's crucial to involve the community - whether by contributing a small amount of resources or by operating a full Logos Edge Hub. Even in the case of the mini data center model, which may require corporate resources, we explicitly recommend and ultimately aim to support small and independent operators, as verifiable identity and accountability will be required.
The next major technical update will likely be released together with the Logos Paper, which will present all components mentioned here - along with others not discussed here - in a complete and cohesive form. From there, we expect to begin a formal specification phase and parallel proof-of-concept development.
Have questions or want to dive deeper? Join the conversation on Discord.