Streamr Network

Decentralized data protocol

Homepage: https://streamr.network/

reference material

Community

Exchanges that listed the coin: 2
Symbol: DATA
Dapp: To be released
Project introduction

The Streamr Network’s mission is to build decentralized infrastructure for real-time data, replacing central message brokers with a global peer-to-peer network.

Executives and partners

Henri Pihkala

CEO

Risto Karjalainen

COO

Nikke Nylund

Co-Founder

MOBI

Fastems

Golem

Latest News

There is no news posted at the moment

Medium

News: Streamr announces a c...

Streamr is excited to announce a collaborative partnership with Tapmydata, a leading mobile app, helping users to discover what information organisations hold about them. As a pioneering Data Union, Tapmydata (Tap) set out with a simple mission: to help people take back control of their data. With Tap’s mobile app the project has broken new ground in the personal data space, helping nearly 10,000 people each day send data access requests and repatriate their data to where it belongs — with users.

Streamr has been pioneering the data infrastructure space, allowing organizations that seek to help their members aggregate and monetize their data to do so in the simplest way possible; through well established, decentralized and scalable data messaging and payment networks.

Tap has proven that people care about their data and care about who controls it. The project is partnering with Streamr to prepare for the next step; to help users monetize their data and help define what the future of an ethical New Deal for consumer insights will look like. Streamr’s focus on real-time data networks and the recent Data Unions 2.0 upgrade present the perfect opportunity for any project wanting to build out data crowdselling capability and manage delivery of an income stream to members.

“Data Unions are a key route for people to get together, get control and realise a Universal Data Income, our mission at Tapmydata. Streamr has led the way in unlocking customer insights for buyers using ethically-sourced data; their framework will help us learn and scale together, at an exciting time for this movement” said Irfon Watkins, Chair at Tapmydata.

Together both projects will prepare an integration that will test how one or more data points from the Tapmydata mobile app could be monetized. The current hope is that the fully integrated monetization feature within the Tapmydata app will be released to its 10,000 users by Q3 of this year.

“Exploring how to onboard existing and successful organisations to a framework is really the test for infrastructure builders. If we have built our payments and data rails in the right way then already flourishing Data Unions like Tapmydata should be attracted to adopting what we’ve created. So this partnership with Tapmydata is a real moment of validation for the whole Streamr team,” said Henri Pihkala, Co-Founder of the Streamr project.

As both projects grow their product offerings, it will be exciting to explore deeper technical integration between Tapmydata’s mobile technology and Streamr’s open-source stack later in the year.

Originally published at blog.streamr.network on April 15, 2021. News: Streamr announces a collaborative partnership with Tapmydata was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 04. 15

Welcome to the February/Mar...

The Streamr team made a herculean effort over March to release the Data Unions Framework upgrade, which is now live on xDai. While the launch had our focus, it didn’t stop the grassroots ecosystem developments happening at the network layer. Here are the highlights from the last two months:

Data Unions 2.0

Almost everything under the hood has been rebuilt in this iteration of the framework. Arguably the biggest change is that the Data Union treasuries, which hold yet to be allocated funds on each Data Union smart contract, are now located on xDai, rather than the Ethereum mainnet. This means that member earnings are no longer held hostage on the mainnet, since withdrawing to the sidechain costs fractions of a cent.

We couldn’t have brought Data Unions to xDai without also bringing the DATA token along with it. From here on, DATA is now a multichain asset. You can use the xDai Omnibridge to bridge your mainnet DATA tokens to xDai, or you can get DATA on Honeyswap via a direct fiat injection into the xDai ecosystem with Ramp. We established a DATA/xDai pool and added a large amount of seed liquidity to provide a good experience on the platform.

Bringing Data Unions directly on-chain will lead to more composability with DeFi legos, like API3 and various AMMs. A bridge between Data Unions and Binance is also in development — this will allow for fast and cheap withdrawals directly to your Binance deposit address. Swash is currently working on its migration to 2.0 and there are new Data Unions in the works that will grow up very quickly!

Stream Registry

Currently the registry of stream metadata and permissions is stored in a centralized database, which needs to be queried by network users for message validation. Our objective is to store this registry on-chain for better security, as well as to enable anyone to read its contents in a decentralized way. This project is led by our new hire, Sam Pitter. Sam is a senior smart contract developer based out of Germany. He’ll be working exclusively on this task and has already made great progress. The Stream registry will be created and managed initially on the xDai chain for its low fees.

If-This-Then Streamr

Streamr now has an If-This-Then-That integration! You can now stream data from 600+ brands, services and IoT devices available on IFTTT to Streamr — just mix data ingredients to build and push stream data points. This means that real-time data on Streamr can act as a data input trigger to any other API on the platform to generate powerful logic and feedback loops.

For example, you could connect Spotify and Streamr together with drag and drop tools — every time you play a song, the song name would be pushed into a stream. It works the other way around too. Every time a new data point arrives on a stream, you could write it to a Google sheet. Or if you have a stream of photo URLs, they could be published to a photo hosting service, in real-time. The possibilities are endless. Here is a guide to help you get started.

Streamr Dashboards go Grafana

Streamr is now an official community data source of Grafana. Check out the Streamr plugin on Grafana, and if you’re new to Grafana, here is a great getting started guide to get your streams visualised.

Token migration

We are reaching out to exchanges, wallets and aggregators for their support in the upcoming token migration, along with preparing the support documentation for users. We will be building a token migration tool and expect exchanges to support the migration natively. We are also researching the different token standards, especially the pros and cons of adopting the ERC777 standard (completely backwards compatible with ERC20).

Tokenomics and the Streamr Network

Measuring the performance and health of the network has been the focus of the network team as of late, especially now that the Network Explorer allows us to inspect how topologies are formed in a much more visual way. We’ve also exposed latency metrics per stream topology to better understand the emergent performance of the network at the lowest levels.

Network throughput is being looked at quite closely, specifically the tradeoffs between WebRTC and websockets. WebRTC is a complex technology that behaves differently to our websocket implementation under heavy loads. We’re tinkering at the very heart of the WebRTC protocol and pushing it to its limits, and we look forward to sharing the results of our progress as we get closer to the Brubeck milestone.

Our tokenomics research is in Phase 3, which has so far included work on formally defining an “MVP” of the Agreement (which is the thing that connects Payers to Brokers, i.e. connects demand and supply in the network). So in Phase 3 we’re starting to get into the really important bits, which is very exciting. Next, the initial Agreement model will be implemented in cadCAD, and then we’ll be able to simulate the behavior and earnings of Brokers (node operators) under various assumptions about how supply and demand enters the system, and for example how the initial growth of the network can be accelerated via potential use of additional reward tokens (a bit like yield farming in DeFi). The full tokenomics isn’t planned until the Tatum milestone which is due next year, however this year we could already have additional token incentives to help kickstart the decentralization of the network once we reach the Brubeck milestone.

Streamr Clients

We’ve pushed a major version update to the client — version 5 is now released. The update contains over 500 commits and brings compatibility with Data Unions 2.0. Only version 5 and up will be compatible with Data Unions from here on. The Java client has also been updated to be compatible with Data Unions 2.0.

This update is also about laying the foundations for moving logic from the client layer into the broker layer. This decoupling will mean that the JS client will import the Network. This means that clients will be ‘light’ in that the heavy logic, such as message ordering and gap filling, will be handled once — in the imported Network package. When this architecture is achieved we will have taken another big step towards the Brubeck milestone.

Deprecations and Breaking Changes

A number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked ‘Date TBD’ will be happening in the medium term, but a date has not yet been set.

On May 31st 2021, the API endpoint to create a stream — /streams POST will no longer automatically generate an ID if it is not supplied in the request. This means that a valid stream ID changes from optional to required. The Streamr clients will be updated in April with this breaking change.

On April 31st 2021, the Canvas and Dashboard features will be removed from the Streamr Core application and the associated API. This was decided by the DATA token holders through a governance vote. You will still be able to create and manage streams, data products, and Data Unions as usual after this change. If you don’t currently use the Canvas or Dashboard features of the Core application, the change won’t affect you and you won’t notice any difference.

The code will be archived into a fork for safekeeping and potential later use. An example of later use could be to relaunch the Canvas tooling at a later time as a self-hosted version which would connect to the decentralized Streamr Network for data.

This notice period gives you time to migrate any of your Canvas-based stream processing workloads to other tools. We in the Streamr team are using a few Canvases ourselves for computing metrics, such as the real-time messages/second metric you see on the project website. It’s pretty straightforward to replace those Canvases with simple node.js scripts that compute the same results and leverage the Streamr JS library, and this is exactly what we intend to do for the few Canvas-based workloads we have internally.

(Date TBD): Support for unsigned data will be dropped

Unsigned data on the network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will be ceased as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication.

Originally published at blog.streamr.network on April 9, 2021. Welcome to the February/March Dev Update! was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
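Editor's note: the Canvas-to-node.js migration mentioned in the dev update above is easy to picture in code. Below is a minimal illustrative sketch (not from the original post) of a messages-per-second counter built on the streamr-client JS library; the private key, stream IDs and exact subscribe options are placeholders and may differ between client versions.

const StreamrClient = require('streamr-client')

// Placeholder credentials and stream IDs; replace with real values.
const client = new StreamrClient({ auth: { privateKey: '0x...' } })
const SOURCE_STREAM = '0xabc.../some-source-stream'
const OUTPUT_STREAM = '0xabc.../messages-per-second'

let count = 0

// Count every message that arrives on the source stream.
client.subscribe({ stream: SOURCE_STREAM }, () => {
  count += 1
})

// Once per second, publish the count and reset it, replicating the
// kind of simple metric the Canvases described above computed.
setInterval(async () => {
  await client.publish(OUTPUT_STREAM, { messagesPerSec: count })
  count = 0
}, 1000)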

Streamr network

21. 04. 09

Collecting metrics data fro...

All decentralized networks, including blockchains and other P2P systems, face the technical problem of how to gather metrics and statistics from nodes run by many different parties. Achieving this isn’t exactly trivial, and there are no established best practices. We faced this problem ourselves while building the Streamr Network, and actually ended up using the Network itself to solve it! As collecting metrics is a common need in the cryptosphere, in this blog I will outline the problem as well as describe the practical solution we ended up with, hoping it will help other dev teams in the space.

The problem with gathering metrics

Getting detailed real-time information about the state of nodes in your network is incredibly useful. It allows developers to detect and diagnose problems, and helps publicly showcase what’s going on in your network by building network explorers, status pages and the like. In typical blockchain networks, you can of course listen in on the broadcasted transactions to build block explorers and other views of the ledger itself, but getting more fine-grained and lower-level data — like CPU and memory consumption of nodes, disk and network i/o, number of peer connections and error counts etc — needs a separate solution.

One simple approach is that the dev team sets up an HTTP server with an endpoint for receiving data from nodes. The address of this endpoint is then hard-coded to the node implementation, and the nodes are programmed to regularly submit metrics to this endpoint. However, authentication can’t really be used here, because decentralized networks are open and permissionless, and you won’t know who will be running nodes in order to distribute credentials to those parties. Exposing an endpoint to which anyone can write data is a bad idea, because it’s very vulnerable to abuse, spoofing of information, and DDoS attacks.

Another approach is to have each node store metrics data locally and expose it publicly via a read-only API. Then, a separate aggregator script run by the dev team can connect to each node and query the information to get a picture of the whole network. However, this won’t really work if the nodes are behind firewalls, which is usually the case. The solution also scales badly, because in large networks with thousands of nodes, the aggregator script is easily overwhelmed trying to query the data frequently from each node.

Both the “push” and “pull” approaches outlined above can be refined and improved to mitigate their inherent shortcomings. For example, originally built for monitoring Substrate chains, Gantree first stores data locally and then uses a watchdog process to sync metrics to the cloud. To avoid the problem of a publicly writable endpoint, node operators need to sign up to the service and obtain an API key to be able to contribute metrics. However, a fully decentralized approach is certainly possible, which decouples the data producer and data consumer, requires no explicit sign-up, and leverages a decentralized network and protocol for message transport.

Requirements

Let’s list some requirements for a more solid metrics collection architecture for decentralized networks and protocols:

- Metrics collection features shouldn’t increase the attack surface of nodes
- Metrics sharing should work across firewalls without any pre-arrangements; metrics consumers should not need to directly connect to the nodes
- The solution should scale to any number of nodes and metrics consumers
- Metrics data should be signed, validateable, and attributable to the node that produced it
- Metrics data should be equally accessible by everyone
- It should be possible to query historical metrics for different timeframes
- Contributing metrics should work out of the box without node runners having to sign up to any service.

The solution, part 1: node-specific data via node-specific streams

The solution is based on a decentralized pub/sub messaging protocol (in my example, Streamr) to fully decouple the metrics-producing nodes from the metrics consumers. Nodes make data available via topics following a standardized naming convention, and metrics consumers pick-and-mix what they need by subscribing to the topics they want. In the Streamr protocol, topics are called streams and their names follow a structure similar to URLs:

domain/path

The path part is arbitrarily chosen by the creator of the stream, while domain is a controlled namespace where streams can be created only if you own the domain. In Streamr, identities are derived from Ethereum key pairs and domain names are ENS names. If your network uses different cryptographic keys, you can still derive an Ethereum key pair from the keys in your network, or generate Ethereum keys for the purpose of metrics.

In our own metrics use case, i.e. to gather metrics from the Streamr Network itself, each node publishes metrics data to a number of predefined paths under a domain they automatically own by virtue of their Ethereum address:

<address>/streamr/node/metrics/sec
<address>/streamr/node/metrics/min
<address>/streamr/node/metrics/hour
<address>/streamr/node/metrics/day

The node publishes data at different frequencies to these four streams. The sec stream contains high-frequency metrics updated every few seconds, while the day stream contains one aggregate data point per day. The different streams are there to serve different timeframes of inspection; a label showing the realtime value would subscribe to the sec stream, while a chart showing the value of a metric for one year would query historical data from the day stream. It’s important to activate storage for the streams, especially the min, hour, and day ones, to enable historical data to be retrieved.

Data points in the streams are just JSON objects, allowing for any interesting metrics to be communicated:

{
  "cpu_load_pct": 0.65,
  "mem_usage_bytes": 175028133,
  "peer_connections": 39,
  "bandwidth_in_bytes_per_sec": 163021,
  "bandwidth_out_bytes_per_sec": 371251,
  ...
}

Additionally, each data point is cryptographically signed, allowing any consumer to validate that the message is intact and originates from the said node.
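To make the naming convention above concrete, here is a small illustrative sketch (an editor's addition, not from the original post) of a node publishing one metrics data point to its per-node sec stream using the streamr-client JS library. The private key, node address and metric values are placeholders, and option names may differ between client versions.

const StreamrClient = require('streamr-client')

// The node's Ethereum private key doubles as its identity; the client
// signs published messages with it, which is what makes each data
// point attributable to the node.
const client = new StreamrClient({ auth: { privateKey: '0x...' } })

const NODE_ADDRESS = '0x1234...' // placeholder: the node's Ethereum address

async function publishMetricsSample() {
  // One data point matching the JSON shape described above.
  await client.publish(`${NODE_ADDRESS}/streamr/node/metrics/sec`, {
    cpu_load_pct: 0.65,
    mem_usage_bytes: 175028133,
    peer_connections: 39,
    bandwidth_in_bytes_per_sec: 163021,
    bandwidth_out_bytes_per_sec: 371251,
  })
}

// Publish a fresh sample every few seconds.
setInterval(publishMetricsSample, 5000)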
End-to-end encryption is not used here, as the metrics data is intended to be public in our use case.

The solution, part 2: computing network-level data

With the above, node-specific metrics streams can now be obtained by anyone to power node-specific views, and various aggregate results can also be computed from them. For people in the Streamr community, including those of us working on developing the protocol, aggregate data about the Streamr Network is very interesting.

To compute network-wide metrics and publish them as aggregate streams, an aggregator script is used. The script subscribes to each per-node metrics stream, which it finds and identifies by the predefined naming pattern, computes averages or sums for each metric across all nodes, and publishes the results to four streams:

streamr.eth/metrics/network/sec
streamr.eth/metrics/network/min
streamr.eth/metrics/network/hour
streamr.eth/metrics/network/day

The different timeframes seen here serve a similar purpose as the timeframes seen in the per-node metrics streams. Note that these streams exist under the streamr.eth ENS name as the domain, making the names of these streams more human-readable and indicating they are created by the Streamr team.

The metrics streams might get used in many ways; a network explorer dapp could display the data to users in real-time with the help of the Streamr library for JS, or it could be connected to dashboards with the Streamr data source plugin for Grafana. Perhaps some people will even use this type of metrics data to make trading decisions regarding your network’s native token.

Conclusion

We were able to solve our own metrics collection problem using the Streamr Network and protocol, so a similar approach might come in handy for other projects too. Most of the developer tooling in the crypto space is still new and immature; many problems in decentralized devops including metrics and monitoring are missing proper solutions. I hope this post helps outline some best practices, gives an example of how to model the metrics streams, and shows how to derive network-wide aggregate metrics from the per-node streams.

Sharing metrics data in a decentralized setting is always optional, because each node is fully controlled by the person who runs it. To make decentralized metrics collection even more sophisticated, the Data Unions framework can be used to incentivise node operators to share metrics. However, that’s a topic we can explore in another blog post. Until next time!

Originally published at blog.streamr.network on April 6, 2021. Collecting metrics data from decentralized systems was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
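The aggregation step described in part 2 above can also be sketched in a few lines. The following is an illustrative, simplified outline (not the Streamr team's actual script): it subscribes to a fixed placeholder list of per-node sec streams and publishes a network-wide average, whereas the real aggregator discovers streams by the naming pattern; addresses, key and option names are placeholders.

const StreamrClient = require('streamr-client')

const client = new StreamrClient({ auth: { privateKey: '0x...' } })

// Placeholder node addresses standing in for pattern-based discovery.
const nodeAddresses = ['0xaaa...', '0xbbb...']
const latest = new Map() // node address -> most recent metrics data point

for (const address of nodeAddresses) {
  client.subscribe({ stream: `${address}/streamr/node/metrics/sec` }, (msg) => {
    latest.set(address, msg)
  })
}

// Every few seconds, average one metric across nodes and publish it
// to the network-wide aggregate stream.
setInterval(async () => {
  const samples = [...latest.values()]
  if (samples.length === 0) return
  const avgCpu = samples.reduce((sum, m) => sum + m.cpu_load_pct, 0) / samples.length
  await client.publish('streamr.eth/metrics/network/sec', {
    nodes_reporting: samples.length,
    avg_cpu_load_pct: avgCpu,
  })
}, 5000)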

Streamr network

21. 04. 06

Helium & Streamr: An End-to...

Picture this. You’ve deployed coverage on the Helium Network using one of the approved Hotspots. As is common for members of The People’s Network, you quickly realize that deploying sensors and conferring utility on the coverage you’ve created is the next best step. You decide to keep it simple and start capturing temperature and humidity data in your neighborhood. The LHT65 from Dragino is an excellent option for this. You start slow, but before you know it you’ve got an entire fleet of sensors transmitting real-time temperature and humidity data. And to make use of all this, you’ve built a simple application for capturing hyper-local environmental data monitoring in your neighborhood. Life is good. But could it be better? Yes.

Enter Streamr. Streamr is a decentralized platform for real-time data. In short, the Streamr protocol lets you transport, broadcast, and monetize data. There is a massive market for well-organized, specific data — with IoT data arguably being the biggest type. In addition to the base Streamr protocol, they’ve built Data Unions — a higher-level framework that enables large groups of people to productize and find markets for data they produce together. And all transactions are facilitated by $DATA, Streamr’s ERC-20 compatible token that makes settlements and distribution of data streams simple and decentralized.

Recently the team at Streamr put together a simple, powerful demo to showcase the potential of marrying Helium, the world’s largest and fastest-growing LoRaWAN network, with Streamr. What this means is that now you can distribute and monetize that hyper-local environmental data, and potentially endless other data streams you could produce on the Helium Network.

Connecting Data from Helium to Streamr

Here’s how it works. In a nutshell, data from sensors deployed to the Helium Network is piped to the Streamr Network via Helium Console’s MQTT integration and the MQTT interface on Streamr nodes. Immediately after data flows into the Streamr Network, all the options in the ecosystem become available, such as broadcasting the data to applications, packaging it as a data product on the Marketplace, or joining a Data Union to package and sell the data with other, similar data. Let’s take a look at how to configure the integration in practice.

(Diagram: Helium and Streamr Architecture)

First, you need a sensor within range of the Helium Network. It will be able to join the network and show up in the Helium Console. In the demo, we had an LHT65 LoRaWAN Temperature & Humidity Sensor connected to the Ambitious Ocean Panda Hotspot located in Helsinki, Finland. Here’s what our LHT65 looks like when it’s onboarded to the Network.

(Screenshot: LHT65 as seen in Helium Console)

And shortly after, here’s what it looks like when data flows in:

(Screenshot: Live packets in the Helium Console)

Once connected, you’ll also need to run a piece of software to bridge data to/from the Streamr Network. At the moment, you need the helium-mqtt-adapter which will bridge incoming MQTT data to Streamr nodes run by others. Later this year you’ll be able to actually run your own Streamr node instead, which will ship with an MQTT interface also suitable for this setup.

Once you have your device and the adapter up and running, you’re ready to create a stream in Streamr Core. On your way there, you’ll need an Ethereum wallet like MetaMask, as this will provide you with an identity in both the Streamr and Ethereum networks. The stream you create will be the stream that contains your data once the integration is complete. In the stream settings, you can also enable storage to keep a history of the data.

The integration will leverage the off-the-shelf MQTT integration available in the Helium Console, so the next step is to go to Integrations and add MQTT. Configure the integration as follows:

- Integration name: “My Streamr node” (or whatever you want)
- Endpoint: a URL pointing to the IP address where you’re running the helium-mqtt-adapter, for example mqtt://username:password@1.2.3.4. Here, username and password are environment variables you set earlier when configuring the helium-mqtt-adapter to secure it.
- Uplink Topic: copy-paste here the ID of the stream you created in Streamr Core, for example 0x…/helium/lht65.
- Downlink Topic: paste here the same stream ID as above.

(Screenshot: Streamr MQTT Integration)

Finally, in the Helium Console, create a Label and add your Device and Integration to it.

(Screenshot: Sensor Label in Helium Console)

As soon as the sensor sends new measurements, the data points will flow to the Streamr Network, appearing in real-time in the stream inspector. If things are working so far, you’re basically done! One additional recommendation though: the sensor is sending its readings in encoded form by default, so to make your data easier to consume, you probably want to convert those values to human-readable form. To do this, you can add a Function in the Helium Console, and apply it to the Label you created earlier. The exact code of the Function depends on what sensor you’re using, but here’s the code for the LHT65 sensor we used in the demo.

Opportunities in the Streamr ecosystem

Now that your data stream is connected to Streamr, you can use the whole ecosystem to your advantage and make your data go further. You can, for example, connect the data to web apps in real-time using the client library for JS, plug it into Grafana for visualizations, connect to IFTTT for automation using community-built tools, or connect the data to smart contracts with Chainlink and soon API3.

For data monetization, you can wrap your stream(s) into a product and sell it on the Streamr Marketplace. Other people will then be able to pay you a reward of your choosing for continuous access to your streams. If the data from your devices alone is not enough to make a compelling product, you can join (or start!) a Data Union, a framework that allows you to join forces with others producing similar data, and “crowdsell” it together using a revenue sharing model implemented by Data Unions.

By realizing this integration, both Helium and Streamr can provide their users with a fully decentralized and trustless global data infrastructure for a “first-to-last-mile” IoT pipeline. Users gain benefits from the network effects of composable ecosystems, and no longer need to accept vendor lock-in or privacy issues inherent with centralized cloud services in order to connect their data to applications and to data consumers. Instead, users can leverage networks made for, and operated by, the people. They stay in control of the data they’re producing, and they gain the opportunity to participate in the emerging data economy.

Originally published at blog.streamr.network on March 29, 2021. Helium & Streamr: An End-to-End Pipeline for Connecting, Delivering, and Monetizing IoT Data was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
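Once measurements are flowing into the stream described above, a quick way to sanity-check the pipeline outside the stream inspector is a few lines of JavaScript with the Streamr client library. This is an illustrative sketch only, not part of the original demo; the stream ID and private key are placeholders, and option names may vary by client version.

const StreamrClient = require('streamr-client')

// Placeholder key; any Ethereum private key works as an identity.
const client = new StreamrClient({ auth: { privateKey: '0x...' } })

// Subscribe to the stream that the Helium Console MQTT integration feeds
// (the same ID used as the Uplink Topic above) and log each reading.
client.subscribe({ stream: '0x.../helium/lht65' }, (reading) => {
  console.log('LHT65 reading:', reading)
})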

Streamr network

21. 03. 29

What the EU’s Digital Marke...

New EU regulation will make real-time data portability mandatory for big platform operators. GAFA (Google, Amazon, Facebook, Apple) will have to give both business users and end users the right to port their data in realtime through APIs — third parties will also be allowed to do this, on behalf of business users at least. This means that Data Unions can get access to data directly at the source on behalf of their members.

What is the Digital Markets Act (DMA)?

The DMA is the second part of three proposals from the European Commission and was published on the 15th of December 2020, alongside the Digital Services Act, and following on from the publication of the Data Governance Act. All three are now in the proposal stage before heading to the EU parliament.

The DMA is solely directed at regulating “Gatekeepers” — big platform operators with significant control over areas of business and end users. Gatekeepers are defined in the following qualitative and quantitative ways. They provide ‘core platform services’, which include:

- online intermediation services (for example marketplaces or app stores)
- online search engines
- social networking
- video sharing platform services
- operating systems
- cloud services
- advertising services.

So this covers platforms like Youtube, Facebook, Apple Music and Amazon for sure. It’s hard to say who else might fall into this category. Further requirements include an annual EEA turnover equal to or above 6.5 billion EUR, as well as 45 million monthly active end users established or located in the Union.

What does this mean for Data Unions?

When it comes to Data Unions, the section is small, but incredibly powerful. The act proposes real-time portability rights, and specifically mandates third parties to utilise those rights. This would open up the doors for a myriad of Data Unions to be built on top of the above-mentioned gatekeeper platforms, using the real-time API access. In the proposed act it says:

“Gatekeepers benefit from access to vast amounts of data […] to ensure that gatekeepers do not undermine the contestability of core platform services as well as the innovation potential of the dynamic digital sector […] business users and end users should be granted effective and immediate access to the data they provided or generated. It should also be ensured that business users and end users can port that data in real time effectively, such as for example through high quality application programming interfaces. The gatekeeper should, upon their request, allow unhindered access, free of charge, to such data. Such access should also be given to third parties contracted by the business user, who are acting as processors of this data for the business user.”

In the next step of the legislation, the European Parliament and member states will debate the Commission’s proposals. However, according to experts, the legislative process will continue for several years.

Originally published at blog.streamr.network on March 22, 2021. What the EU’s Digital Markets Act means for Data Unions was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 03. 22

Get to know the winners of ...

Get to know the winners of the Streamr Data Challenge — Vidhira by Aathmanirbhar

If you’ve been following the Streamr Data Challenge, you’ll know that we brought together a talented group of individuals who participated in perpetuating the need for decentralized data economies through Streamr. In the course of four months, the Streamr Data Challenge saw 217 Team registrations from developers — these projects were built across various industries like healthcare, supply chain & logistics, agro-tech, fin-tech, ed-tech and more.

However, on Demo Day, we provided a platform for the Top 5 projects of the Streamr Data Challenge to make their pitches: Team Binary, Geeks Squad, Radixolabs, Hack Inversion and Aatmanirbhar. After some enthusiastic pitch sessions, we declared team Aathmanirbhar the champions of the Streamr Data Challenge for their project Vidhira, a decentralized web application designed for content creators to showcase their work and prevent plagiarism.

In this blog post, we’re introducing the Aathmanirbhar team and giving you a chance to get to know them better. Let’s dive right in.

Tell us more about Aathmanirbhar and how this team came together

Team Aathmanirbhar is a four-member team from the Shri Ramdeobaba College of Engineering and Management. We’re classmates and hang out quite often at college. After the pandemic hit and the lockdown was imposed, we didn’t get the opportunity to meet, but we did connect to share ideas, start pondering upon them and build stuff together. Once we heard about the Streamr Data Challenge, we decided to grab this opportunity and take part in the hackathon.

What is Vidhira?

Vidhira is a decentralized web application built on the Matic Network, designed for content creators who want to showcase their work without having to worry about their content being plagiarized. Vidhira is actually a Sanskrit word meaning “Of the creator”.

Tell us more about the problems that Vidhira tackles for the average content creator

With Vidhira, we are trying to address the following problems:

- Traditional social media platforms do not have a mechanism to publicly authenticate an item as one of a kind, thus content is being endlessly reproduced
- The internet and related technologies have created an expectation among some audiences that all digital content should be free
- Data breaching is another issue that is hampering the work of content creators.
- The internet is shifting the focus of many content creators from artistic creation and curation to promotion and marketing to earn. So, we want to encourage the creation of content without shifting their process.

How are you solving this problem?

Whenever a user uploads a post (any piece of content) on Vidhira, the hash value of the image is available on the data union stream. We are providing a unique id and a hash value for any piece of content that is being posted in our web application by the user, which will help in keeping proper track of the content.

If another user tries to download the image from the original content creator’s account and repost it on his account, then the image will be reposted, but with content credits of the user who owns the original content. Also, the duplicate image is not uploaded on the stream, thereby providing only the original and authenticated data on the marketplace.
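The upload flow the team describes here, that is, hash the image, keep the hash as a public fingerprint, and push it to a stream, is easy to sketch. The snippet below is a purely illustrative outline of that idea, not Vidhira's actual code; the stream ID, private key and file path are placeholders.

const crypto = require('crypto')
const fs = require('fs')
const StreamrClient = require('streamr-client')

const client = new StreamrClient({ auth: { privateKey: '0x...' } })

// Compute a SHA-256 fingerprint of the uploaded image bytes.
function hashImage(path) {
  const bytes = fs.readFileSync(path)
  return crypto.createHash('sha256').update(bytes).digest('hex')
}

// Publish the fingerprint (not the image itself) to the Data Union stream,
// so duplicates can later be detected by comparing hashes.
async function registerUpload(path, postId) {
  const imageHash = hashImage(path)
  await client.publish('0x.../vidhira/image-hashes', { postId, imageHash })
  return imageHash
}

registerUpload('./photo.jpg', 'post-123').then(console.log)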
Whenever a post that was reposted is liked by a random viewer, the like would also get credited into the original creator’s post, thus benefiting and encouraging the original content creator.

Vidhira rewards the users with ETH for posting their content on this platform. Moreover, being a decentralized social media platform, it enables end-to-end encryptions for every interaction. On Vidhira, users can create accounts without having to link to real-world identities, like email addresses or phone numbers. All the user needs is a crypto wallet like Metamask which relies on public-key cryptography for account security, rather than relying on a single organisation to protect user data.

What inspired you to build Vidhira?

People have a tendency to ignore things until they actually hit them. The same being the case with us; one of our friend’s artistic content was plagiarised and got reposted by another account without content credits to our friend. Surprisingly, the copied image was getting more views than that of the original content creator, i.e. more views than on our friend’s post.

This incident inspired us to create ‘Vidhira’ so that the original content creator is recognized and benefitted. There are many other creators whose work is plagiarised but they have no solution for it. By creating this web application, we serve the purpose of helping them.

As winners of the $5,000 grant, what are the next steps for Vidhira?

At this initial phase, we only presented our idea, the basic implementation and how our web application is going to work. After the acclaim our project received, we are looking forward to making Vidhira a more user-friendly decentralized web application. We are also looking to introduce some more things in Vidhira so that it reaches the list of top social media apps and attracts more users to the platform.

What value do you think Data Unions add to projects like Vidhira?

The most important thing that was lagging in our project was helping the content creators earn something out of their hard work, and this was eventually fulfilled by Streamr. Data Unions provide a way to bundle a user’s real-time data together with others and distribute a share of the revenue when someone pays to access it.

In Vidhira, we provide Data Unions for images and captions. Images are something that every organisation requires, and companies spend a lot of time and money on them. For example, for presentations or to include them in their websites. Vidhira allows users to showcase their artistic creations or photography skills, and then send it in real-time to the Streamr Marketplace, to a special Data Union, where buyers can purchase the aggregated image hash values and then retrieve the original images from IPFS using the hash value.

Vidhira does not send user credentials, thus protecting the user’s privacy. As a Data Union administrator, Vidhira takes a small percentage of all sales of the images. Vidhira would emerge as a high potential use case that uses Streamr’s tech stack to reward users for their content that opens up a whole new portal of innovation and future scope.

How was your experience integrating with Streamr to set up Vidhira’s Data Union?

The documentation we were provided with was quite helpful for us to integrate into Streamr. We encountered some challenging errors which took time to get fixed, but the documentation provided was concise and simple to work with.

How did you like participating in the Streamr Data Challenge?

Participating in the Streamr Data Challenge was a good experience as we all got to know about the concepts of Data Unions for the first time. Learning more about Data Unions while hands-on, making a project, helped us acquire the concepts quite strongly. And since it was our first hackathon it was a good start on our hackathon journey.

Describe your learning experience with the Streamr Data Challenge

There were quite many things that we learned along the way. It started from integrating blockchain smart contracts with ReactJs, then learning about the Matic Network, and then integrating our app with the Streamr Marketplace. All these things had some challenges with them, but we as a team worked hard, so these challenges seemed quite smooth after a certain period of time.

What were your biggest takeaways from the Data Challenge?

There are many learnings that we took from this challenge. Since this was our first hackathon, this challenge gave us a feel for how we have to exercise patience in a hackathon. We also worked in a team for the first time and it was a great experience for all of us because we helped each other when it came to understanding some concepts and learning new things. We also got to interact with some amazing people during our demo session. Discussing our project with them and getting feedback from them was quite amazing.

Kudos, Team Aathmanirbhar! You did some great work.

Vidhira was one of 116 projects that were selected from a pool of 200+ registrations that had viable Data Union use cases. Their story speaks of grit, teamwork, and inspiration that set out to change the way content creators can combat plagiarism. This led them to win a $5,000 (USD) grant from Streamr to get Vidhira in front of those who are working hard to promote the content they create. We’re proud of all the other teams who put their best foot forward to showcase the power of decentralized data economies.

Originally published at blog.streamr.network on March 19, 2021. Get to know the winners of the Streamr Data Challenge — Vidhira by Aathmanirbhar was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 03. 19

The Data Unions upgrade is ...

The Data Unions upgrade is live — DATA token goes multi-chain with xDai

The Data Unions 2.0 upgrade is now live on Ethereum mainnet and xDai chain, and all new Data Unions deployed via the Streamr Core app will use the new architecture automatically. The upgrade brings security and robustness improvements to Data Unions, adds new features, and enables composability with DeFi. The upgrade also means that the Streamr ecosystem and DATA token now span over multiple blockchains! The Streamr team will work with existing Data Union builders to help them upgrade to the improved platform.

For those of you learning about Data Unions for the first time — the Data Union framework is a data crowdsourcing/crowdselling solution built as part of the Streamr project. Working in tandem with the Streamr Network and Ethereum, the framework powers apps that enable people to earn by sharing valuable data. Learn more about it here!

What’s new?

Under the hood, everything has changed. The previous-generation Data Unions framework used the Monoplasma off-chain solution for scalability, while Data Unions 2.0 is fully on-chain and implemented as smart contracts. While the gas costs on Ethereum mainnet are currently sky-high, Data Unions 2.0 can run on any Ethereum-compatible sidechain for scalability and affordable cost of operation. The first supported sidechain is xDai, and other options may be added in the future.

The benefits of having Data Unions fully on-chain are many. First of all, the security of the system becomes as strong as the security of the underlying blockchain itself. Secondly, Data Union members and builders get to benefit from composability, network effects, and services available on the same blockchain, particularly DeFi platforms.

Previously, Data Union members needed to withdraw their earned tokens to Ethereum mainnet in order to do anything with them. With current gas prices on mainnet, this has become unfeasible, and tokens are being kept “hostage” by high transaction fees. With Data Unions 2.0, tokens can optionally be withdrawn and used on the sidechain — for example swapped to some other tokens using a DEX present on that sidechain. The transaction fees on sidechains are orders of magnitude cheaper than on Ethereum mainnet, making it much more feasible to move around even small amounts of value. On xDai, transactions currently cost only fractions of a cent, compared to transactions costing tens or even hundreds of dollars on mainnet.

In addition to the architectural changes, the upgrade also comes with some new features requested by builders. Data Unions now support gasless withdrawals via metatransactions. This means that Data Union admins can optionally pay the transaction fees on behalf of members, potentially enabling a better user experience. Another new feature enables tokens to be directly deposited to a particular user’s balance in the Data Union, allowing Data Union builders to (for example) run referral campaigns that pay out personalised rewards (as opposed to the equal revenue sharing among members, the core feature of Data Unions).

The Data Unions 2.0 architecture

Data Unions now live on a chosen sidechain. Information about its members is maintained in the state of the smart contract, and tokens flow in and out of the Data Union smart contract via normal transfers of ERC-20 tokens. As the DATA token originates from the Ethereum mainnet, a bridge such as the xDai OmniBridge is used to move tokens between chains.

Wallets and contracts on Ethereum mainnet can still interact with Data Unions over the bridge. For example, the Streamr Marketplace smart contract can transfer tokens into a Data Union when the data product is purchased. The tokens automatically cross the bridge to the sidechain before being deposited as revenue into the Data Union smart contract.

For those of you interested in seeing more of the technical details, have a look at this Miro board as well as the DU2 smart contracts repo on GitHub.

Towards cheaper gas costs

Obviously, the extreme gas costs on Ethereum mainnet can only be worked around by moving away from mainnet. Data Unions 2.0 is a first step in that direction, enabling many Data Union members to use, trade, or transfer their earned tokens without ever bridging them to mainnet. A growing amount of DeFi platforms and liquidity is available on sidechains, and in the future tokens can even be bridged between sidechains to access additional services or liquidity. For example, the xDai team has deployed a beta version of a bridge between xDai and Binance Smart Chain, allowing tokens to be withdrawn from xDai Data Unions even to Binance Smart Chain without going through the expensive Ethereum mainnet. As sidechain and bridge technologies are developing very quickly at the moment, we expect to see many new possibilities enabled over the course of this year.

That being said, any transactions on the mainnet, including bridging tokens to sidechains, are still expensive. Parts of the Streamr ecosystem, in particular the Marketplace, are still on mainnet, and purchasing access to data products can cost tens of dollars in fees, often more than the price of the data subscription itself. This severely limits the usefulness of the Marketplace at the moment, which is also the case with most other dapps on Ethereum. While Data Union teams can fall back to more traditional interactions with potential data buyers as a temporary solution, it’s clear that the Marketplace along with other smart contracts in the Streamr ecosystem will eventually need to migrate to sidechains, and the team will continue working towards this goal to completely solve the issue of unacceptable transaction fees.

Why choose xDai for Data Unions?

We looked into xDai, Matic/Polygon, Binance Smart Chain, and Avalanche as the potential first homes for the new release. While all of them are very interesting, the xDai chain ticked almost all the boxes for this use case, making it a great chain to kick off with:

- Low enough transaction fees to make Data Union member management and withdrawals feasible even for large Data Unions with hundreds of thousands of members. Swash, the first and biggest Data Union, already has over 14,000 members, and it’s likely that much larger Data Unions will exist in the future.
- A robust Arbitrary Message Bridge (AMB) implementation that is extensible enough to support ERC-677 callbacks when the recipient address on sidechain is a smart contract. The bridge also passes transactions quickly (in minutes).
- Transaction fees are more predictable thanks to using a stablecoin as the native token.
- The Graph indexes the xDai chain.
- Ramp offers direct fiat-to-xDai conversions.
- The team is responsive and welcomes the use case.

From the perspective of the xDai chain, the Streamr Data Unions framework brings 14,000+ Data Union members as well as a handful of new builder teams in touch with the chain.

To better service Data Union admins, members, and other stakeholders of the Streamr ecosystem, we are looking forward to seeing more DeFi platforms and liquidity moving to xDai in the future. As the sidechain space and especially bridges evolve over time, we’ll be keeping a close eye on other suitable chains to support. In the future, Data Unions could exist on multiple sidechains, and the preferred one would be chosen when the Data Union is created.

Manually bridging DATA to/from xDai

Users with DATA tokens in their wallets can bridge them to/from the xDai chain using the xDai OmniBridge UI, which supports Metamask and WalletConnect-enabled wallets. To see and use tokens in your wallet on xDai chain, configure xDai chain using these instructions, switch to that, and then add a custom token 0xE4a2620edE1058D61BEe5F45F6414314fdf10548, which is the DATA smart contract on xDai.

To make transactions on the xDai chain, you’ll need some xDai for gas. You can buy some via Ramp, or transfer DAI from mainnet using the bridge.

SDK versions compatible with Data Unions 2.0

JS SDK: version 5.0 onwards. It should be noted that 5.0 and newer versions are not backwards compatible, i.e. they only support Data Unions 2.0, while 4.x and older versions only work with Data Unions 1.0.

Java SDK: version 2.1 onwards. Older versions did not implement Data Union interactions at all.

Migrating first-generation Data Unions to 2.0

The Streamr team will get in touch with everyone running Data Unions on the old platform, and help them upgrade. Using a utility script, a mirror copy of the old Data Union will be created on Data Unions 2.0, and the Streamr team will provide the token liquidity to create and maintain the copy. Then, the end-user application for the Data Union will be updated to use the new version of the SDK and pointed to interact with the new Data Union instead of the existing Data Union. Eventually, once all migrations are complete, the old Data Union framework will be shut down.

Join the Streamr developer ecosystem

The best place to learn about building Data Unions and working with Streamr open source technologies is the docs. To get in touch with our team for support, make sure to join our Discord server. See you there!

Originally published at blog.streamr.network on March 17, 2021. The Data Unions upgrade is live — DATA token goes multi-chain with xDai was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
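To make the manual bridging section above a bit more tangible, here is an illustrative ethers.js snippet (an editor's addition, not from the announcement) that reads a wallet's DATA balance on xDai using the token address given above. The RPC endpoint and wallet address are assumptions/placeholders.

const { ethers } = require('ethers')

// Public xDai RPC endpoint (assumed; use whichever endpoint you trust).
const provider = new ethers.providers.JsonRpcProvider('https://rpc.xdaichain.com')

// DATA token contract on xDai, as listed in the post above.
const DATA_XDAI = '0xE4a2620edE1058D61BEe5F45F6414314fdf10548'
const erc20Abi = ['function balanceOf(address owner) view returns (uint256)']

async function printDataBalance(wallet) {
  const token = new ethers.Contract(DATA_XDAI, erc20Abi, provider)
  const balance = await token.balanceOf(wallet)
  // DATA uses 18 decimals, like most ERC-20 tokens.
  console.log(`DATA on xDai: ${ethers.utils.formatUnits(balance, 18)}`)
}

printDataBalance('0xYourWalletAddress...')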

Streamr network

21. 03. 17

Data Unions — 12 months in ...

Data Unions — 12 months in review

As the launch of the Data Unions 2.0 framework draws near, we thought we’d do a roundup of all that has happened on the Data Union front for the last 12 months. This time last year, just as the lockdown was about to hit a good proportion of countries around the globe, Streamr settled on its annual strategy. We would build out the Data Union ecosystem whilst continuing to work away at our main vision: a P2P pub/sub Network for real-time messaging. There were two major business reasons for this. The first was: Data Unions utilise the Network, and thus provide serious early adoption. The second reason was: Data Unions in and of themselves are a worthy endeavour for Streamr to pursue as a way to change up the global data economy. How do Data Unions do this? By allowing data from potentially hundreds of millions of people to be bought and sold within a far more solid consensual trading framework (i.e. share data and earn revenue). Effectively Streamr is attempting to port Web2 data into the Web3 space, turning the largest digitally native product on the planet — information — into a Web3 asset to be bought, sold and even financialised. It’s a big vision. Here’s what we’ve achieved so far:

Laying the groundwork

Product development

Without the software, there’d be nothing for Data Union operators to build on, so getting this right was obviously key. Though an MVP version of the framework had been available through 2020, the beta launch went live in June, providing a GUI for builders through the Streamr Core app. In October, we finally went fully live and pushed out our marketing efforts with the public launch of Data Unions 1.0. The GUI through the Core app also provided a certain level of sandboxing for builders. And now, Data Unions 2.0 is a week away from release. It’s a very different beast. Instead of Streamr running a server to maintain the Data Union backend, the smart contracts, which split revenue between members and operators and manage member joins, will be on the Ethereum mainnet, with a sidechain providing a scalable means to make those payments. Smart contract audits are complete and the frontend team is exercising 2.0 functionality with the JavaScript client. The final hurdle, as Matthew Fontana, Streamr’s new Head of Ecosystem, says, is the bridge allowing members to exit Data Union payments from Ethereum mainnet. Launching on the xDai chain is what the dev team has opted for.

Developer resources

Before onboarding to the Data Union stack could begin, we also had to ensure that the requisite resources were in place. Creating dev docs, video tutorials, a new Data Union product creation flow and other useful materials such as blogs and a dedicated web page took up most of the summer.

Grants

The third part of laying the groundwork for success was setting up a more streamlined grants system. In 2019, our grants system was run by part-time community members. Last year we took most of that work in-house to speed up and professionalise the process, bringing the funding and those aiding the development within Streamr closer together.
We’ve now issued over 30% of our grants from our pool of 10M $DATA — many of which have gone to Data Union builders, and with a reworked web page, developers have a much clearer route to applying to those grants.

Building external networks

The final part of laying the groundwork for Data Unions’ success was ensuring that external players from business, academia, and policy activism knew Streamr’s mission and could help guide us to make further connections. Setting up our advisory board with some real practitioners and thinkers, such as the Data City’s Alex Craven, RadicalxChange’s CEO Matt Prewitt, Sussex University’s Maria Savona, and The Data Union’s President James Felton Keith, was a crucial first step. They have all been massively helpful in pointing out problems, spreading our message, making connections, and aiding Data Unions like Swash by giving guidance on everything, from data structuring for buyers to marketing, messaging and ethics. We also joined the MyData Global network and registered as an operator ourselves, though strictly, we are infrastructure builders for operators. This has given us access to other builders in the space, which we hope in time will lead to more Data Unions being built on the Streamr stack. At the end of last year, we were the gold sponsor of their annual global conference.

Ecosystem growth

Startup builders

Last year the majority of our internal developer firepower went towards getting our small startup Data Union builders both interested in the stack and then off the ground. Work slowed over the summer as we waited to onboard someone new to head up developer relations and ecosystem growth, an internal pick: Matthew Fontana. In the last few months this work has expanded again, and to date there have been some wins and losses.

Swash is the most developed of all the Data Unions built on the Streamr stack. As committed Streamr members will know, it’s a plugin that functions on all major browsers, including Chrome and Firefox, collecting rich seams of web browsing information from members in order to monetise it on their behalf. Ebrahim Khalilzadeh, lead developer for Swash, began his Data Union journey back in Feb 2019 and it has been remarkable watching his progress:
Swash membership has grown 10x in a year, crossing the 10,000+ member mark a few weeks back.
He’s built out a team of four developers, hired a CEO and a CMO, and brought in a handful of advisors.
Swash was selected by Outlier Ventures for their startup program.
They started selling their data on Streamr and the Ocean Market data marketplace, where their products have more than €300,000 in staked value.
Swash is now looking towards a first serious round of funding.

Xolo is the working title for a project that collects health data from integrations with high-end Withings smartwatches. It is at the prototyping stage and is now looking to onboard its first test users and work to refine its data product offering.

Unbanks is a project being built by devs with a background in banking, real-time data and application development. Registered as a PSD2 company under open banking regulation, Unbanks will be able to read the banking transaction data of its members and help monetise that on their behalf. It has a lot of potential.

MyDiem has now been siloed because Gang, who was leading the development from China, had to return to his PhD.
The mobile app, which collects information on how members use other apps on their phone, is now open sourced and waiting for an intrepid team to take it over for development.

Partnering with Lumos Labs, we also launched a hackathon in India to get Data Union ideas off the ground and tested. It was quite exciting to see all of the new ideas being built on the framework. The top five teams presented at the Demo Day on Saturday for a chance to win a $5,000 grant. A roundup of the hackathon is coming soon!

Enterprise outreach

Work on the enterprise front began a year ago. The idea here was that we could get large businesses to remodel the way they permission data and then more easily monetise it by adopting the Data Union share and earn framework. The industry vertical we chose to target was telecoms, because mobile network operators (MNOs), most especially in Europe, have access to a lot of backend data and the potential to access far more through frontend apps. But currently they find the regulatory, consent and privacy hurdles very difficult to overcome. Towards that goal, we partnered with GSMA, the consortium at the heart of the mobile industry. With GSMA’s help we devised a programme that MNOs could pay a fee to join. The programme would last four months, within which we would build out a pilot app and undertake market research with their users to test the feasibility of creating an MNO-operated Data Union. It was an exciting idea and one that GSMA was eager to promote to their membership. We set ourselves a tight deadline for MNO onboarding so as not to overrun on costs at our end.

Although we had several meetings with MNOs at the highest level — including CEOs — we ultimately ran out of time to onboard them. Lockdown also made the sort of in-person meetings where trust can be built impossible. Furthermore, GSMA underwent an internal reorganisation because a major source of their income, Mobile World Congress, had to be cancelled twice in a row. In hindsight, we would have given ourselves more time to onboard MNOs. But another stumbling block was the use of crypto to distribute payments, which is of course a vital piece of technology for distributing small amounts of value to multiple members. Big enterprises aren’t yet ready from a legal standpoint to embrace crypto, but that of course is changing every month.

Data Unions marketing and messaging

We’ve had some great marketing and PR wins:
Academics, policy makers and activists, including one of the world’s foremost computer scientists, are now talking about Data Unions.
120,000 hits now on Google for the phrase “Data Union” — Streamr is at the top.
82 press mentions in 2020, including Yahoo Finance, BBC, CoinDesk, the FT, Cointelegraph, Forbes and Harvard Business Review.
25 conference appearances despite the pandemic.
Several online meetups as part of the Hackathon and lots of community AMAs.
Social media engagement, including hit tweets, and 47k views on the Data Union YouTube video.

Digital Planet - Can AI predict criminal behaviour? - BBC Sounds
Explaining the concept of Data Unions on the BBC’s Digital Planet

Lobbying

Perhaps our biggest standout success this year was Europe embracing Data Unions in the form of new legislation and grants.
In November, the European Commission released the draft of the Data Governance Act, which sets out a licensing and regulatory framework for data unions — or data intermediaries as they call them — and will give this nascent sector a huge boost in terms of funding, trust, stability and assurance, and will signal a direction of travel to the world. They have also announced that €2bn in funding will be made available for those seeking to build enabling software and Data Union projects. If that were not enough, a second piece of EU legislation, the Digital Markets Act, will also unlock user data from the big tech companies like never before. As currently drafted, the act seeks to establish real-time portability rights for users of ‘gatekeeper’ platforms. Should this provision pass into law, Data Union operators will be able to utilise streams of information from iTunes, Amazon, Google and the like, and onboarding to Data Unions will happen in a comparative flash, with access to user information coming through APIs, not through a plugin or secondary application.

After nearly a year-and-a-half of lobbying by Streamr and our friends in RadicalxChange and MyData Global, these regulatory changes are an extraordinary win. With the EU backing Data Unions, drawing up new rules, providing money, and also permitting far more data from the big Silicon Valley gatekeepers to come on board, it really is just a matter of time before even more versions of Data Unions come online and start succeeding in a big way.

Where is this all going?

It’s hard to doubt the potential for success. Other web and mobile Data Unions like GeoDB, Panel App, TapMyData and MobileXpression already have millions of users between them, who earn income from filling in surveys, or permitting their browsing or location data to be read and monetised in a privacy-protecting manner. Data Union operators have also organised themselves into a lobbying, information sharing, and standard setting ecosystem under the MyData Global banner. That organisation counts nearly 30 projects as builders of these new, user-focused data organisations and over 100 associate member bodies.

But the real insight here might be that Data Unions are acting as a bridge between Web2 and Web3, helping to convert data from the tech giants into Web3 assets. If we can pull this off, it’ll be a huge victory. Not just for ordinary people, who will benefit both from being remunerated for their data and from secondary innovation based on data sets that are no longer siloed. This will also be a massive win for the Web3 space, as the largest digitally native asset in the world — information — floods into the space and starts to get traded and even financialised.

We’re excited to be a part of this and can’t wait to see what the rest of 2021 holds in store. Feel free to leave any questions for me in the #data-unions channel on Discord. I’ll be running a Data Unions AMA on Thursday 18th March at 3pm UTC, so I look forward to seeing you there.

Originally published at blog.streamr.network on March 11, 2021.

Data Unions — 12 months in review was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 03. 11

News: Tracey integrates wit...

Tracey, the seafood tracing app developed by TX on top of the Streamr stack, announced today several new DeFi integrations. Tracey helps small-scale fishers in emerging markets gain access to finance by sharing verified information on their seafood catch and trade data. The trade data is used by DeFi and CeFi lenders to ascertain creditworthiness and offer suitable microfinance based on the eligibility of the borrower.

By integrating with Binance Smart Chain (BSC), Tracey users will gain access to lenders within the BSC ecosystem. The EasyFi lending protocol joins as an additional lender. EasyFi will conduct creditworthiness analysis and offer microfinance through its own lending ecosystem. UnionBank, an already existing partner of the Tracey project, will continue offering CeFi lending solutions. In addition to enabling lending services, BSC will be used as the project’s blockchain solution.

The micro SME market has traditionally been excluded from institutional finance due to the difficulty of assessing the creditworthiness of individuals who often hold no bank accounts. The new DeFi integrations will enable lenders to provide further liquidity for this largely untapped market. Expanding to crypto lending in the Tracey project removes the geographical barriers to scaling the product, which exist with traditional banking institutions. Binance’s crypto exchange is the largest in the world by trading volume and has 12 million users globally. Through this collaboration, the Tracey solution will be able to facilitate lending in 180+ countries.

Binance will participate in several rounds of pilots, the first of which is taking place this month in the Philippines for the WWF-led Fisheries Improvement program. Post-pilot, the product has the potential to roll out to other small-scale operators in Fishery Improvement Project (FIP) sites as well as Aquaculture Improvement Projects (AIPs) around the world, and later, to support other micro SME markets.

“TX is engineering value from data to deliver solutions that help solve global challenges around poverty by improving the livelihoods and welfare of actors within the micro SME markets of emerging countries,” explains TX Managing Director Ben Sheppard. Susan Roxas, Fisheries and Finance Lead of the WWF Coral Triangle Program, says “We are excited by the innovative developments and the potential that this collaboration offers in solving challenges around first mile traceability of seafood. Industry members of the Global Dialogue on Seafood Traceability (GDST) recognise the challenge in obtaining first mile data from small scale fishers. We believe that by providing key incentives, the appropriate technology is now making this possible”.

Going forward, the Streamr Marketplace will be used to offer Tracey data streams. It presents a new way for fisherfolk to generate an additional source of income by crowdselling their catch data within a Data Union. Another opportunity could be the sharing of catch data for scientific purposes.

Originally published at blog.streamr.network on March 10, 2021.

News: Tracey integrates with Binance Smart Chain ecosystem enabling DeFi lending for micro SMEs was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 03. 10

Meet the top 5 projects fro...

The Streamr Data Challenge, based in India, brought together a talented set of developers in an open innovation program, inviting them to create valuable data economies through the Streamr Data Union framework. By integrating with the Data Union framework, participating developers can easily bundle and crowd sell the real-time data that their users generate, gain meaningful consent from those users and reward them by sharing data sales revenue. In effect, this enables the developers to create an open data ecosystem, thereby democratising the data economy.Five exciting months and 200+ registrations later, we’re finally ready to announce the Top 5 teams of the Streamr Data Challenge who will present the solutions they built at the demo day for a chance to win a grant worth $5000 USD.Here are the Top 5 projects chosen to be a part of the grand finale:PlantPay by Team BinaryPlantPay is an initiative by Team Binary, a four-member group of students from the SRM Institute of Science and Technology. PlantPay incentivises users to plant saplings and nurture them. The app encourages people to participate in creating a greener world by materialising accountability and maintaining a tamper-proof ledger that can record activity for further incentivisation. Plantpay’s Data Union records data on the nature of the plant, health conditions, soil, location etc, all of which can help stakeholders in this space to gather meaningful insights.Geeks Squad’s agriculture query platform for farmersGeek Squad is a four-member team from Walchand College of Engineering who were inspired to provide a one-stop forum to help educate farmers and answer agriculture-related queries. As a part of their Data Union, Geek Squad gathers data such as pictures of crops grown, soil-related data and the health of these crops. This data can be put on a marketplace for wholesalers to assess the crop being grown in a particular region. The nature of soil sourced from the pictures can be used by surveyors to understand the quality of soil over a period of time in a particular region.Up-Health by Hack InversionHack Inversion, a three-member group hailing from SRM University, developed UpHealth, a healthcare app that tracks and maintains health and fitness records. The Data Union integration for the app enables the collection of data from electronic medical records, without sharing the patients’ personal information. If a patient chooses to share their medical history, they can be rewarded with Streamr DATAcoins that they can redeem on their next visit.Smart Agri Tech Kit by RedixolabsRedixolabs is a four-member team based out of the Indian Institute of Information Technology. They devised an application that will help farmers learn more about technology and the solutions that it offers. This Data Union app records farm yield, stores this data on the cloud and tracks the status of the crop yield. This data can be very useful for wholesalers or large scale restaurants wishing to source high-quality crops.Vidhira by AathmanirbharVidhira is an application developed by a four-member team from the Shri Ramdeobaba College of Engineering and Management. Their decentralized web application, Aathmanirbhar, is designed for content creators who want to showcase their work without having to worry about their content being plagiarised; their content is tagged with a unique ID for this purpose. 
The Data Union records this information and streams it to a marketplace where people looking to buy content can pay for it, which in turn rewards the creator of the content.

Our hearty congratulations to the Top 5 teams! These teams will now battle it out at the Streamr Data Challenge Demo Day for the top prize by making their pitches. We congratulate everyone from the shortlisted teams for their star efforts in creating Data Unions.

Come join us in supporting these projects on the 6th of March, from 17:00 IST onwards; register here.

Originally published at blog.streamr.network on March 4, 2021.

Meet the top 5 projects from the Streamr Data Challenge was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 03. 04

Streamr Marketing update — ...

Streamr Marketing update — What’s been happening? What’s coming up?This blog is designed to provide an update on Streamr marketing to date, and present the Streamr marketing outlook in 2021 and beyond. A lot is going on at a project level, in the world of crypto and in the world at large. Now is a good time to look at where we’ve come, where we are and where we’re going.Looking back to the first marketing blog I wrote when I joined the Streamr project, a few points stand out for their sustained relevance: we didn’t want to rely on unsubstantiated hype, token shilling and premature rollouts; we wanted to ‘show not tell’ the world about Streamr capabilities; we were avoiding pay-for-play tactics and we wanted to invite developers to build with a clear call-to-action in order to grow the Streamr ecosystem. A lot of progress has been made since then: new frameworks have launched, we have tested the Network, launched a decentralized governance model and are building out the Network decentralization, all whilst delivering against the project roadmap on schedule. Through all of that, we have maintained the aforementioned marketing principles in everything we have done to promote Streamr.So what have we achieved?The marketing focus of the Streamr project in the last year or so has been adoption of the Streamr stack, particularly with the launch of Data Unions, a framework that enables ethical data sharing via end-user applications. In 2020, the adoption strategy was focused on growing the ecosystem of devs building apps on the Data Union framework. So far, many of the tactics have shown some success — we have seen a growth in applicants to the Data Fund in the last couple of months since its relaunch, plus the Streamr Data Challenge, a hackathon based in India encouraging devs to build Data Unions using the framework, received hundreds of applicants, and now a Top 20 promising use cases are in production. The marketing principle of ‘show don’t tell’ was possible with regard to Data Unions because the Data Union framework is in a good position to build upon, and a great pilot application was available to ‘show’ in Swash. It was a story we were ready to tell and it is encouraging to see the growth of Swash, which has just hit 10,000 users, and the genesis of similar use cases like it.The Swash browser pluginWe have made a lot of noise about Data Unions through lobbying and PR, and as such we have seen some traction in consumer/end-user, developer and legislative spaces. The EU has taken note of the issues that we have been vocal about over the last couple of years, such as data ownership and ethical data selling, and the Data Governance Act and the Digital Services Act will be instrumental in the future data economy going forward.Data Unions have been referenced by Forbes, The Financial Times and CoinDesk — we’ve even talked Data Unions with the BBC. Beyond spreading the ethical, end user-focused message, we have been targeting developers in their spaces through focused ad campaigns, keywords and SEO. We’ve been seeing real results from these tactics, including more builders and partnerships (which we will hopefully be able to share very soon), as well as the aforementioned press. This marketing effort targeted journalists, devs, enterprises and there has been some solid traction. There will be an update blog on the specificities of Data Union ecosystem growth coming soon.Data Unions could potentially lead to large scale adoption of both the framework itself and by extension the Streamr Network. 
On the tech front, we will be rolling out Data Unions 2.0 in Spring, and making sure builders are aware of the improvements this upgrade has to offer. While we continue to promote Data Unions as a call-to-action and higher-level framework to build with, much of the marketing spotlight will shift towards the heart and soul of the project, the Streamr Network.The Streamr Network is the technical backbone of the Streamr ecosystem. While the marketing team has been showing (and will continue to show) what the applications supported by the Network — including Data Unions — can do, the dev team have been pushing the Network towards its most important milestones. Now it’s time to ‘show’ the Network, front and centre.Spotlight on the Streamr NetworkAs we approach the Brubeck milestone and look ahead to Tatum, the time is coming where we can now ‘show not tell’ people about the Streamr Network capabilities. Brubeck signals a landmark time for the project — anyone in the Streamr community can run a node on the Network, easily connect applications to it, and contribute bandwidth. We’re excited to roll out a simple ‘plug-and-play’ functionality to enable them to get started. Brubeck will be released along with a Network Explorer, where anyone can see what’s going in the Network and the nodes that it consists of. The time is now right for more promotion in web3 and crypto spaces, because the Network will finally start decentralizing, and its power can be shown via real demonstrations.Last year, we already talked about what the Network has to offer, with the in-depth Performance and Scalability whitepaper. It’s pretty exciting that the scalability and latency of the Streamr Network are on par with the best centralized services. But through more use cases and ecosystem growth, we can show the world what these developments mean for P2P networks and how Streamr fits into the wider world of decentralized tech.What will this look like?There are many tactics currently in development for effectively marketing the Network and the list below provides an overview of some of the main marketing aims that house them. Marketing campaigns that we’ll be running in the short term include Data Unions 2.0 and the launch of Brubeck. The tactics detailed below will factor into these campaigns and also run throughout 2021.1. Website updateTo get people talking about what Streamr brings to the table, a first point is making sure that people can understand the Network offering in a clear way. To support that, a major website upgrade is in the works. While the current website does talk about the capabilities of the Network, the new site will crystallise the project’s narrative and positioning as critical Web3 data infrastructure and a scalable, decentralized alternative to centralized cloud services. The crypto space has created its own financial systems, governance models, and disruptive technical infrastructures, and we will more strongly than ever align ourselves with this revolution.The benefits of strong SEO and keywords are a ‘behind-the-scenes’ tactic that we have been employing for some years, that have brought the project solid marketing successes in the form of inbound enquiries from potential partners, Data Fund applicants, and newsletter signups, so we will maintain that work on the website via blogs, backlinking and site optimisation.2. 
The Streamr Network in crypto spacesBased on site analytics and market research, we have seen that external validation is essential to encourage greater awareness of Streamr in the web3 and crypto spaces, as well as the press releases and talking heads from the team (which took focus with Data Union marketing). This external validation can come in a variety of ways — social media is one of these (see the Community Growth section below), and targeting personas in crypto and web3 is another.With a clearer message and something to ‘show not tell’ about the Network, we will be inviting thought leaders and influencers in crypto to review the Streamr project and, better yet, try the technology for themselves by running a node and connecting it to data. This method of spreading the word to relevant influencers is something we have not yet pursued in a programmatic way, since the Network was still being built. With the Network becoming more realised, there is a bigger story to tell that may capture the interest of technologists and people who are passionate about decentralization. These people can now learn about Streamr through participation, and spread the word to their audiences.In its upcoming form, there is more opportunity for creative marketing with the Network. The Streamr project team is currently brainstorming unusual and interesting demos that show what can be achieved with a truly trustless and decentralized network. We aim to build some of these use cases in-house, not only to share and encourage sharing, but also to inspire others to become part of the ecosystem and harness the potential of the tech. To encourage innovation with the Network, we will also be running an open grants programme, inviting devs to create and share their own Network-powered, decentralized data-driven apps for rewards.More coverage in crypto and Web 3 spaces can also be won through partnership announcements, which generate some great organic interest if the story is sufficiently compelling, as we have seen very recently. The good news is that there are more partnerships in scope for 2021, which we will definitely be making noise about when the time is right.3. Node incentivisationAnother way we can ‘show’ the story of the Network is via clear growth. To support with scaling the Network, marketing will promote a node incentive programme that will be developed and launched by the project team this year.This programme will include incentivized testnets, encouraging node operators to join particular streams for rewards, and participants will be able to visit dedicated spaces on Discord for tech support. The aim is to encourage everyone, at different levels of experience, to participate in the Network, and we’ll be working toward a ‘one-click’ functionality, with associated tools and materials to onboard users and give them all an opportunity to earn rewards.4. Community growthThe Streamr community is growing every day. We have already hit close to 400 members on Discord, a community channel that has been live for only a couple of weeks, and we want to increase that momentum throughout the year. One of the metrics for marketing success is the growth of all our community and social channels. 
Although all marketing ties together and social media awareness is different to market awareness, there will be concerted efforts, via advertising and promotion, to build the presence of Streamr in all channels where crypto and web3 conversations thrive.As well as that, we will be fostering space for more openness and project transparency. In the next few weeks, we will be opening up project team discussions, including updates, planning and brainstorming, to the community. Anyone who wants to see more about what we do on a biweekly basis, and get more insight into how the project runs, will be welcome to attend project discussions in a ‘Town Hall’. In this way, the community can get updates first hand and speak directly with the team about these updates in real time.5. Events / Hackathons / IdeathonsAlthough still somewhat restricted due to the pandemic, events are important when it comes to the world of data and blockchain. You will still see members of the Streamr team speaking at online and virtual events that are relevant to the Streamr project.https://medium.com/media/1d0e1909962cb9d6f8c8a2177d95581c/hrefWe’ve recently announced the Top 20 projects from the India-based Streamr Data Challenge, and more hackathons could be in scope later this year, as well as ideathons about applications on the Network.How can you support Streamr Marketing?So what can you do, as a member of the community, to support Streamr marketing?Let us know where you want to see Streamr. Where do you go for your crypto news? Which voices in crypto and web 3 would you love to hear talking about Streamr? With the Network focus, now may be the time to show them what we can do, and if their platform is a good fit for the Streamr message, we will surely pitch our story to them.Share your memes! — The meme channel on Discord is thriving and we may even be rewarding the best memes there very soon. But this creativity need not be restricted to our community only. Share your memes on Twitter and encourage others to enter the world of Streamr.Spread the word! — If there is a launch or a big Streamr announcement, or even something general going on in the world that is relevant to the project, mention Streamr. Tag us. Tell your networks, tell your friends.Let us know what you think about running a node in the Streamr Network. We want to make sure we understand the motivations of people who might currently be ambivalent about or interested in adding a node, so that we can refine the best way to speak to them. Your input is always appreciated, so please take 5 minutes to fill in this survey, and let us know your thoughts.Keep up to date — Make sure you’re signed up to the monthly Packet Switch newsletter and join the Streamr Discord server. In future, we’ll share the highlights of the marketing strategy on Twitter, so we encourage you to follow Streamr there if you don’t already.I hope you’ve found this Marketing update illuminating. It’s worth noting that Streamr must be focused on building the underlying infrastructure for the project to thrive, and marketing will promote the associated project strides and use cases that emerge along the roadmap. Without the infrastructure in place, marketing the token in isolation would be toothless because the token on its own has limited utility. Token gains, when they do happen, are a positive side-effect of effective holistic marketing. 
In terms of these plans, there is still some refinement taking place, but we’re happy to share the marketing direction and some tactics that we developed at the start of the year.Next up is marketing delivery, and we’re excited to get started. Feel free to leave any questions for me in the #marketing channel on Discord. I’ll be running a Marketing AMA on Tuesday 2nd March at 3pm UTC, so I look forward to seeing you there.Originally published at blog.streamr.network on February 25, 2021.Streamr Marketing update — What’s been happening? What’s coming up? was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 02. 25

Meet the finalists of the S...

After four exciting months, we’re ready to announce the shortlisted Top 20 teams of the Streamr Data Challenge.The Streamr Data Challenge engaged developers to create more valuable data economies through Data Unions. So what are Data Unions? They are a means to create ethical data products that reward data providers and enable buyers to extract insights. Data Union builders can navigate privacy and consent issues around the monetisation of personal information. By integrating into the Data Union framework, or building a Data Union app, interested developers or platform owners can unlock and crowdsell the real-time data that their users generate and reward them by sharing data sales revenue.Once the Streamr Data Challenge opened, we received very interesting submissions across several industry verticals with out-of-the-box Data Union integrations. The winners of the contest stand to benefit from the experience of learning more about data privacy and decentralized data economies, in addition to the $5000 grant.After much deliberation, we have chosen the following projects to be a part of the first shortlist:AirPower by MugglesMuggles, a four-member group from Chitkara University, have built an all-in-one energy management solution on an IoT- based infrastructure.Hack Elite’s Birds Voice Recognition ProjectHack Elite, a two-member team from the Indian Institute of Information Technology, Nagpur, has come up with a novel solution to make bird-watching a better experience. Automated classification allows hobbyists to easily retrieve information on the bird species heard.Plic by PliclyPlic is a three-member team from the PES University, with a vision to provide real-time posture correction for laptops and monitors, without any wearables.Kisan CropAllot by Gray MattersGray Matters is a four-member team from the Indian Institute of Information Technology, Allahabad. They built Kisan CropAllot, an ML-based crop management and recommendation system for farmers.PlantPay by Team BinaryPlantPay is an initiative by Team Binary, a four-member group of students from the SRM Institute of Science and Technology. Simply put, PlantPay incentivises users for planting saplings and nurturing them.GlobeTrotter by HackerzHackerz is a three-member team from PES University that developed GlobeTrotter, a web app that connects verified local guides with travellers for tours.Healthsy by Bug BustersHealthsy is an app that will predict and verify diseases at home, using the power of Machine Learning and mobile devices. It was developed by a four-member team from the Indian Institute of Information Technology, Nagpur, Healthsy’s Data Union gathers patient information such as medical history, age, allergies and prescribed medicines.Geeks Squad’s agriculture query platform for farmersGeeks Squad is a two-member team from Walchand College of Engineering who were inspired to provide a one-stop forum to help educate farmers and answer agriculture-related queries.UpHealth by Hack InversionHailing from SRM University, Hack Inversion, a three-member group, developed UpHealth, a healthcare app that tracks and maintains health and fitness records.Smart Agri tech Kit by RedixolabsRedixolabs is a four-member team based out of the Indian Institute of Information Technology. They devised an application that will help farmers learn more about technology and the solutions that it offers.Air AwareAir Aware is an initiative by Divij Jain from the Guru Gobind Singh Indraprastha University. 
Air Aware detects the air quality of a given region in real-time using a HEPA capture material. It is also equipped to detect details like the presence of pathogens and disease-spreading bacteria.

Host Engine by Hack-n-toss
Host Engine is a blockchain-based, decentralized web hosting solution that uses the InterPlanetary File System (IPFS) protocol to host websites and avoid issues arising from server downtime. Host Engine was developed by Hack-n-toss, a three-member team from SRM University.

NetworkDexter
NetworkDexter — X is an Android app built by a two-member team from Chandigarh University. Built to connect people independently of the internet or a mobile network, NetworkDexter leverages the features of the average phone, including Wi-Fi, BLE and hotspot, as well as ultrasonic audio, to connect phones in the vicinity.

Employ by Insignia Encorp
Insignia Encorp, a four-member team from the Velammal Engineering College, developed Employ, a Platform as a Service application built to provide employment opportunities to the uncategorised workforce of the Indian economy.

Pragati by TheCodeClutch
Pragati is a web app that helps economically disadvantaged women become financially independent by allowing them to post self-made products and services. It is also a marketplace for handcrafted goods.

Digital Rise
Digital Rise was built by a team of two from SRM University. It is an AI-based online assessment monitoring software that scans for malpractice using eye and face tracking and a custom warning algorithm.

Blueprint
Blueprint was created by a two-member team from the Institute of Engineering and Technology — Devi Ahilya Vishwavidyalaya, Indore. It is a video classroom that makes online classes interactive and fun.

Farmernest by Energizers
Farmernest is an application that cuts out the middlemen in the agricultural supply chain and helps farmers get direct access to sellers.

WastEarn by Black Pearl
WastEarn is a waste management solution that purchases recyclable garbage from waste producers and sells it on to recyclers. The average citizen can earn by selling household waste. WastEarn was developed by the four-member team named Black Pearl from the Indian Institute of Information Technology, Vadodara.

Vidhira by Aathmanirbhar
Vidhira is an application developed by a four-member team from the Shri Ramdeobaba College of Engineering and Management. This application ensures that the content created is attributed to its makers without any room for plagiarism. Content created by the users is tagged with a unique ID.

Our hearty congratulations to the shortlisted folks!

What’s next?

In this phase of the Streamr Data Challenge, we’re going to hear the top 5 teams make their pitches to compete to win the $5,000 grant at Demo Day. The Demo Day will also feature a panel discussion on the Internet of Ethereum. With the foundational layer of application-grade blockchains reaching maturity, the race for mass adoption is truly on. There are layer 2 protocols that make building dApps, onboarding and retaining users simpler for dApp developers. But what does this ecosystem look like and what is the role of these stakeholders on the Internet of Ethereum? This panel discussion will feature experts from the blockchain space who understand decentralized performance, developer and user experience, and most importantly data handling.
The panel will discuss how all of those elements come together to realise a decentralized internet.Meet the speakersHenri Pihkala — Founder & CEO, StreamrSandeep Nailwal — Co-founder & COO, Polygon (previously known as the Matic Network)Harsh Rajat — Founder, EPNSAniket Jindal — Founder, BiconomyGanesh Swami — Co-founder, CovalentRaghu Mohan — Co-founder & CEO, Lumos Labs (Moderator)We believe that the Demo Day will conclude the Streamr Data Challenge by enabling innovation with decentralized data economies, support the upcoming projects in this space, and bring together a community of kindred folk who can contribute to building consumer inclusion in today’s data economy.We would love to see you at the Streamr Data Challenge Demo Day on March 6th, starting at 5:30 p.m. IST!Join us by registering here.Originally published at blog.streamr.network on February 24, 2021.Meet the finalists of the Streamr Data Challenge was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 02. 24

Dev Update, December — January

Dev Update, December — January

Welcome to the January 2021 developer update. It’s been a strong start to the year, with great progress being made toward one of the biggest milestones of the project — Brubeck. The Brubeck milestone signals a phase transition for the project, where anyone in the Streamr community can contribute their bandwidth by running a node on the Network. At the crossing of this milestone, the Streamr website will see a major upgrade that will include the new Network Explorer, which is now feature complete.

Brubeck will highlight the vast potential of the underlying infrastructure that currently powers Data Unions. For example, an exciting use case that we see activating in 2021 is the direct adoption of Streamr’s underlying decentralized pub/sub protocol. After all, if a dApp relies on centralized cloud infrastructure, is it really decentralized? We look forward to sharing more about these use cases in the coming months.

Beaming Streams

Streamr community builder Sergej Müller built a cool Grafana plugin to beam Streamr data into Grafana’s real-time dashboards. This development is especially timely, given our recent governance vote to deactivate Streamr Dashboards. From the same brilliant developer, we also have two new free data products on the marketplace: Growth of the Streamr Marketplace, an aggregated view of real-time key figures of the Streamr Marketplace, and Ethereum Gas Prices, a real-time Ethereum gwei gas price feed.

The Network

On the Network front, the team has been focusing on stability and robustness improvements. Here are some of the highlights:
Network & protocol codebases converted to TypeScript
Significantly improved topology stabilisation
Broker node connections with WebRTC
Network topology metrics endpoints added
Node-specific metric reporting
Migrated our DevOps flow to GitHub Actions

(Embedded tweet from @mattofontana)

Data Unions

Data Unions 2.0 is ready to land; however, the final hurdle is the bridge exiting from the Ethereum mainnet. Launching on the xDai chain is looking most likely, though we welcome late challengers such as Avalanche. The requirement for the sidechain is that it is EVM compatible, with a battle-tested token bridge which is operated in an at least somewhat decentralized way.
The bridge must support ERC-677 tokens on the sidechain side, and must be able to invoke the callback function when tokens are transferred over the bridge. We also want Data Unions to be on a sidechain with strong traction in the crypto ecosystem.xDai and its TokenBridge fulfills all but one requirement; ERC-677 calls across the bridge. They have recently merged this improvement, however it requires a security audit, which can take some time given the current crypto market cycle. xDai chain is backed by Maker, Gnosis, and some other projects, and there is some DeFi traction happening on this chain.Avalanche (C-chain) is EVM compatible, although there are some differences in the JsonRPC, so it’s not 100% compatible. The Chainsafe bridge is used on Avalanche and it looks as though we may have to create a completely parallel bridge and operate it, making it completely centralized. That might be a blocker. As for traction, Avalanche has good potential due to its high visibility and market cap position.In the meantime, Data Unions 1.0 received some stability improvements and appears to now be operating well under load. Over the Christmas break, the India-based Streamr Data Challenge hackathon has been picking up momentum as we approach demo day on March 6th where the Top 20 teams will compete for the main prize.DATA TokenomicsPhase 3 of our work with BlockScience is currently in progress. So far the work has been focused on defining an “MVP” (Minimum Viable Product) of the agreement connecting payers to brokers, i.e. supply to demand.This initial Agreement model will be implemented in cadCAD, after which we’ll be able to simulate the behaviour and earnings of Brokers (node operators) under various assumptions about how supply and demand enters the system, and for example how the initial growth of the Network can be accelerated by potential use of additional reward tokens (a bit like yield farming in DeFi).The Brubeck website update will include an infographic explaining the $DATA tokenomics of these payer to broker agreements.Canvases and DashboardsThe project recently ran a decentralized governance vote where DATA token holders could decide the fate of Streamr Canvases and Dashboards. The results are now in, the vote to drop Canvases and Dashboards was passed. We all love Canvases here at Streamr but we felt as though there is more to be gained by focusing on the Network. There will be a blog post later this month discussing the timeline of deactivation but we do hope to see Canvases back in some form at a later date.Deprecations and Breaking ChangesA number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked ‘Date TBD’ will be happening in the medium term, but a date has not yet been set.Originally published at blog.streamr.network on February 18, 2021.Dev Update, December — January was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
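The ERC-677 requirement mentioned in the dev update above boils down to one pattern: a single transferAndCall() both moves tokens and triggers an onTokenTransfer() callback on the receiving contract, which is how bridged revenue can be credited to a Data Union in one transaction. The sketch below illustrates that call shape using ethers.js v5; the RPC URL and contract addresses are placeholders of ours, not real deployments, and the ABI fragment only covers the two functions needed for the illustration.

```typescript
// Minimal sketch of the ERC-677 transferAndCall pattern. Placeholder addresses
// and RPC URL; assumes ethers v5 and a PRIVATE_KEY environment variable.
import { ethers } from 'ethers';

const ERC677_ABI = [
  // ERC-20 subset plus the ERC-677 extension
  'function balanceOf(address owner) view returns (uint256)',
  'function transferAndCall(address to, uint256 value, bytes data) returns (bool)',
];

const RPC_URL = 'https://rpc.example.org';                              // placeholder
const TOKEN = '0x0000000000000000000000000000000000000001';            // placeholder ERC-677 token
const RECEIVER = '0x0000000000000000000000000000000000000002';         // placeholder receiving contract

async function payIntoContract(): Promise<void> {
  const provider = new ethers.providers.JsonRpcProvider(RPC_URL);
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY as string, provider);
  const token = new ethers.Contract(TOKEN, ERC677_ABI, signer);

  // One transaction: transfer 10 tokens and have the token contract invoke the
  // recipient's onTokenTransfer(from, amount, data) callback in the same call.
  const tx = await token.transferAndCall(
    RECEIVER,
    ethers.utils.parseEther('10'),
    '0x', // optional payload passed through to the callback
  );
  await tx.wait();
}

payIntoContract().catch(console.error);
```

This is why the update insists that the bridge must be able to invoke the callback on the sidechain side: without it, bridged tokens would arrive at the Data Union contract without the contract ever being notified.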

Streamr network

21. 02. 18

The SIP-1 & SIP-2 voting re...

The first round of Streamr Governance has come to a successful end and the voting booth is now closed. Let’s take a look at what’s been happening.Over the past two weeks, the Streamr community has been discussing the first two Streamr Improvement Proposals for the Streamr project. The first proposal, SIP-1, proposed a token migration to enable token economics, including an extension of the hard-coded maximum supply from 1 to 2 billion DATA. The second proposal, SIP-2, proposed the dropping of Canvas and Dashboard features. The vote went live on Snapshot on February 11th and ended on February 16th.Even though the proposals, especially SIP-1, were fiercely debated within the community on Discord and during an AMA, the results are more than clear.One community member summarised their thoughts as follows, and I think it’s quite representative of the community’s thought processes:“My first reaction to SIP-1 was: I am against this! This morning I am more open to vote YES, because of three arguments from the team and community: 1) The possible benefits outweigh the concerns by orders of magnitude, 2) Staying agile and responsive vs. “dinosaur”, and 3) it supports a move from hodling towards participating; hence it will be beneficial for me (I hope).”An overwhelming majority of 99.89% of voting power was in favour of the proposal. Even if we ignore token weight to discount the greater voting power of large token holders and just look at how each address voted, 98.98% of addresses voted for the proposal. Only 153.95k DATA were staked against the proposal. SIP-2 was voted on with even more decisiveness, with very close to 100% of votes in favor of the proposal.Only 14% of the total DATA supply participated in the vote, and that number is even fewer if tokens held by the Streamr project are excluded. While of course disappointing, the low participation rate was somewhat expected, as it is a fairly common phenomenon when projects run votes such as this. A large share of DATA is constantly held at exchanges, despite that not being the most secure practice, and withdrawing tokens from exchanges into personal custody in order to vote may be too much of a hurdle to go through for some people. On the other hand, perhaps the quality of votes wins over quantity at the end of the day; it’s reasonable to assume that those who do bother to vote are the ones most interested, informed, and engaged with the project.Based on these results, both Streamr Improvement Proposals will be implemented over the coming months.SIP-1 ImplementationThe first actionable task on SIP-1 will be to reach out to exchanges and token information sites — Binance, CoinMarketCap, CoinGecko, etc. — to learn about their processes and timelines for handling token migrations. They have all done it tens of times already with other projects, so hopefully we will discover that they all have smooth processes to support such upgrades. The timeline of the upgrade depends largely on these 3rd parties. I am expecting the preparations to take a few months, and the migration itself could perhaps start in early summer.Once the coordination work is looking good, the technical part begins. A new DATA token smart contract will be deployed with an initial supply of zero and a maximum supply of 2 billion DATA. Token holders will be able to swap their existing tokens 1:1 for the new token at any time, and in this process the old tokens are burned and an equal amount of new tokens are minted. 
As a result of more and more token holders going through the upgrade, the new token’s supply will grow from zero and approach the current token’s supply of around 1 billion DATA.As discussed extensively in SIP-1, increasing the maximum supply to 2 billion DATA does not increase or dilute the supply at this time. The number of DATA in existence will stay the same. Future SIPs will let the community decide if and how many new tokens should come into circulation, if any. This proposal and the migration that follows are only about technically enabling such decisions to be taken by token holders in the future.Additionally, deploying a new smart contract allows us to extend the existing ERC-20 standard of DATA by implementing ERC-677. This is useful in many situations, including moving tokens across interchain bridges, moving tokens to Data Unions, and so on.There will be a simple dApp for carrying out the token migration in a self-service manner. The Streamr team will also encourage exchanges and other custodial parties to migrate the tokens in their custody, to make life easier for people less experienced with dApps and Ethereum wallets.The new token will adopt the DATA symbol, while the old token will be renamed to DATAv1. As mentioned before, you will be able to migrate your DATAv1 tokens at any time, meaning that there will be no particular urgency to do so.SIP-2 ImplementationThe implementation of SIP-2 is more straightforward. On April 31st 2021, the Canvas and Dashboard features will be removed from the Streamr Core application and the associated API. You will still be able to create and manage streams, data products, and Data Unions as usual after this change. If you don’t currently use the Canvas or Dashboard features of the Core application, the change won’t affect you and you won’t notice any difference.The code will be archived into a fork for safekeeping and potential later use. An example of later use could be to relaunch the Canvas tooling at a later time as a self-hosted version which would connect to the decentralized Streamr Network for data.This notice period gives you time to migrate any of your Canvas-based stream processing workloads to other tools. We in the Streamr team are using a few Canvases ourselves for computing metrics, such as the real-time messages/second metric you see on the project website. It’s pretty straightforward to replace those Canvases with simple node.js scripts that compute the same results and leverage the Streamr JS library, and this is exactly what we intend to do for the few Canvas-based workloads we have internally.Computed by a Canvas today, but easy enough to achieve with a script too.Learnings for the TeamBoth proposals were far from trivial — if these questions were no-brainers, the project wouldn’t need a governance process. We’re happy to have seen so much debate and participation online during the days leading up to the vote, especially on the project Discord. We’re also happy to see all of the active community in very strong alignment after all arguments for and against the proposals were presented and explored. For future votes, our goal will be to reach beyond the 14% participation rate we saw this time. It’s only the beginning of a new era of decentralized governance, and we in the Streamr team will definitely continue to encourage and support all DATA holders in getting their voice heard.Originally published at blog.streamr.network on February 16, 2021.The SIP-1 & SIP-2 voting results are in! 
was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
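The SIP-2 section above mentions replacing the team's metrics Canvases with simple node.js scripts built on the Streamr JS library. As a rough illustration of what such a script might look like, here is a minimal sketch that counts messages on one stream and publishes a messages-per-second figure to another stream once a minute. The stream IDs are hypothetical, and the import and subscribe forms follow the streamr-client v5-era documentation, so they are assumptions to check against the client version you actually use.

```typescript
// Minimal sketch: a node.js replacement for a metrics Canvas.
// Assumptions: streamr-client v5-era API, PRIVATE_KEY set in the environment,
// and hypothetical stream IDs that you would replace with your own.
import StreamrClient from 'streamr-client';

const client = new StreamrClient({
  auth: { privateKey: process.env.PRIVATE_KEY as string },
});

const INPUT_STREAM = 'example.eth/input-stream';     // hypothetical stream ID
const METRICS_STREAM = 'example.eth/metrics-stream'; // hypothetical stream ID

let count = 0;

// Count every message that arrives on the input stream.
client.subscribe({ stream: INPUT_STREAM }, () => {
  count += 1;
});

// Once a minute, publish the average rate and reset the counter.
setInterval(async () => {
  const messagesPerSecond = count / 60;
  count = 0;
  await client.publish(METRICS_STREAM, { messagesPerSecond });
}, 60 * 1000);
```

The same shape — subscribe, aggregate, publish — covers most of the simple metric Canvases the post refers to, such as the real-time messages/second figure shown on the project website.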

Streamr network

21. 02. 17

Streamr, TX join ATARCA con...

Streamr and TX join the ATARCA (Accounting Technologies for Anti-Rival Coordination and Allocation) consortium to explore new economic systems for industrial data markets.A new economic category for abundant goods — anti-rival goods that increase in value when shared — forms the basis of research to be conducted by ATARCA. ATARCA aims to create a new economic system in which digital goods are no longer traded with mediums of exchange, such as fiat money, but with mediums of sharing.Current data markets are built on centuries old structures of exchange, in which a scarce rival good, such as oil, is traded for its financial equivalent, money. With a newly received EU Horizon 2020 FET Open award of €2.75M, the ATARCA consortium aims to investigate new economic structures.The European Commission’s FET Open call challenges applicants to lay the foundations for radically new technologies with a potential for future social or economic impact or market creation. Despite a fiercely competitive call, with 902 proposals submitted by the 3rd June 2020 deadline, Streamr, TX and its consortium partners were successful in their application for funding and expect to kick-off with the project in April 2021.To date, technical and legal mechanisms are used to support Intellectual Property Rights (IPR), such as Digital Rights Management (DRM). These mechanisms were originally meant to incentivise the creation of digital and other intangible goods. However, they create artificial scarcity and thereby fail to most efficiently support distribution of goods which, by their nature, benefit from sharing.A new medium of sharing will be tested in two different pilot projects. In the first pilot, ATARCA will develop, together with Streamr and its consulting arm TX, new industrial data markets to test drive mediums of sharing as a means of payment within Streamr’s real-time data ecosystem. The Streamr Network will be used as the underlying infrastructure for the transmission and sharing of the pilot project’s data. TX will be handling the technical implementation of both pilot projects.The second pilot will be run in Barcelona, together with local communities using the REC, a new social currency. The REC is a citizen exchange system complementary to the Euro, to which a second dimension, that of sharing, will be added. REC is an initiative of NOVACT, the international Institute for Nonviolent Action, which promotes social transformation processes based on human rights, justice and democracy in the Euro-Mediterranean region.Countering mainstream economics, ATARCA argues that the current economic system is not fit for the 21st century, in which humans increasingly trade abundant goods with a finite medium of exchange. The term ‘anti-rival goods’ denotes a new theoretical category of goods that are characterised by abundance and that, unlike rival or non-rival goods, become more valuable the more they are being used.This increase in value is due to network effects, which draw in an ever more increasing number of users to online platforms such as LinkedIn or Fortnite. The more these platforms are populated by users, the better the experience for the individual user. The cost of onboarding an additional user is close to zero. 
Looking beyond social online platforms, the same can be said for coronavirus tracking apps, industrial data markets or neural networks, which get better the more they are being used and fed with information.To leverage this phenomenon, ATARCA proposes to incentivise participation through the creation of a new financial technology, anti-rival tokens. These distributed ledger technology (DLT)-based tokens are used to instantiate a new ‘substance’ of quantified anti-rival value, a medium of sharing. The smart tokens will enable efficient, decentralized, market-style trading and ecosystems for anti-rival goods. Hence, they work somewhat like money, being a store of value and a unit of account. But instead of being a medium of exchange, they are a medium of sharing.Unlike cryptocurrencies, such as Bitcoin, the value of anti-rival tokens will not be based on scarcity but on the underlying human relations. Their value reflects the way relationships are built over time through repeated interactions, by default benefitting all sides of transactions.Professor Pekka Nikander from Aalto University’s Department of Communications and Networking explains that, “in ATARCA, we create cryptographically protected anti-rival tokens. We will test their applicability to governing industrial data markets and fostering cooperation in community driven currencies. If successful, this technology will not only help to properly organise the markets for data and other digital goods, but provide the structural fundamentals of a new type of economic growth. This will allow the societies at large to more widely explore structurally new incentives for systemic sustainability and scalable systemic intelligence.”Originally published at blog.streamr.network on February 15, 2021.Streamr, TX join ATARCA consortium to explore new economic systems for industrial data markets was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 02. 15

Kickstarting Streamr Govern...

*Update 16/02/21, voting has now finished. The Streamr community voted in favour of SIP 1 & 2. See the results.

Holders of DATA, the utility token that powers the Streamr ecosystem, will be able to vote on Streamr protocol governance decisions starting from February 11th, 2021. We are very excited to take this first step in empowering the Streamr community to make important decisions about the future of the project.

Towards decentralized governance

Since the Streamr project launched in 2017, the core team has been making decisions about the project in order to make progress on the roadmap and towards the project vision. While efficient, such a centralized governance model is only appropriate as a temporary placeholder. Decentralized governance allows anyone with a stake in the project to contribute to steering it, enabling the wisdom of the crowd to kick in and push the project towards the best long-term outcomes.

Not too long ago, the governance models of decentralized projects tended to be on the heavy side with lengthy constitutions, on-chain voting with enforceable decisions, and elections of boards and councils. We wrote an extensive governance whitepaper to survey different governance processes, both traditional and digital, to gain insights into best practices in the space.

However, the DeFi boom of 2020 has shown that lightweight, hybrid governance models can engage stakeholders, offer guidance to project teams, and add value to ecosystems, while avoiding the complexities of full-blown decentralized governance. Snapshot, the awesome tool by Balancer Labs, has become the voting booth of many high-profile projects such as SushiSwap, Yearn, and Balancer itself. There is now a Streamr space on Snapshot, enabling the community of DATA holders to signal their opinions about submitted governance proposals.

How will it work?

The governance process will kick off with two Streamr Improvement Proposals, SIP-1 and SIP-2, as introduced below.

To participate in voting, you will need to have your tokens in your own custody (in a wallet for which you control the private key) when Ethereum block 11836300 is mined, which is estimated to occur around 4pm UTC on Thursday February 11th. The voting will open around the same time.

The token balances at this “snapshot block” will determine your voting power, which is proportional to the amount of DATA you hold. If you are holding DATA tokens on an exchange, or providing liquidity on Uniswap or another DEX, you will need to withdraw them into your wallet before the snapshot block occurs. Right after the snapshot block you are free to move your tokens out of your wallet — this won’t affect your ability to vote. The voting will stay open for five days and close at noon UTC on Tuesday February 16th 2021.

Voting on Snapshot will not require any gas. We recommend the ubiquitous MetaMask wallet for interacting with the voting UI. Snapshot also supports WalletConnect, Fortmatic, Coinbase, and Torus.

We encourage you to consider the proposals ahead of time. Here are the first two Streamr Improvement Proposals:

SIP-1: Token migration to enable token economics

In the past year or so, many Ethereum projects have gone through a token migration. While it’s a somewhat cumbersome process that requires a lot of coordination with token holders, exchanges, and information sites, it gives the communities the opportunity to update their token contract to implement new standards (e.g.
Golem’s GNT to GLM), change branding and support new features (e.g. Aave’s LEND to AAVE), or raise the supply hard cap to enable inflationary token economics (e.g. OCEAN token migration to double the hard cap, setting it to the originally planned value).

The main reason for the token migration is to double the hard-coded maximum supply from 1 to 2 billion DATA.

It’s very important to understand the following, so please read carefully:

This proposal does not change the circulating supply or market cap. No new tokens are minted by passing SIP-1.

For any new tokens to be minted, a separate governance proposal clearly describing the amount and purpose of the tokens must be created, and voted on by the token holders. Tokens can only be minted if such a proposal passes.

This proposal is about technically enabling such decisions to be taken by token holders in the future. With the current token smart contract, minting tokens is impossible, even if the community wanted to and decided to do so.

The case against the proposal

Minting new tokens in the future will dilute existing token holders accordingly. While none will be minted with this decision, the decision signals that the community wants to explore inflationary reward models in the future.

The case for the proposal

A controlled inflationary pool is a powerful tool to incentivise network participation, as seen in almost all blockchain networks including Bitcoin and Ethereum. In the DeFi space, there are many examples (e.g. UNI, CRV, etc.) where minted tokens have been used to incentivise growth and energise the community, and often the growth has added ecosystem value orders of magnitude beyond the caused dilution. Token holders can also defend against inflation by participating in the related programs — participation being exactly the goal of the incentives.

Subsequent proposals will need to be drafted and voted upon for any tokens to be minted. That said, here are some potential use cases for such tokens:

Bootstrapping the decentralized Streamr Network. Similar to yield farming in DeFi, minted tokens can act as early-adopter bonuses and help focus the community’s efforts. This would also enable nodes to be rewarded already in the Brubeck phase where the Network’s usage fee mechanisms don’t exist yet.

Rewarding DATA holders for voting on governance proposals (such as this one)

Incentivising long-term maintenance and development of the Streamr codebase

Incentivising long-term ecosystem growth and building of applications beyond the lifespan of the current Data Fund.

Why exactly 2 billion DATA?

Token migrations require a lot of communication, coordination and resources to implement. While we don’t expect the full reserve to ever be needed, the hard limit should be set sufficiently high to avoid ever having to do another migration in the future. In practice, only a fraction of it may end up being created, but it’s better to overshoot than undershoot here to avoid tying the community’s hands to decide on incentive programs.

Implement ERC-677

The change to the hard cap is the only material change in the token migration. However, it is also an opportunity to do a few technical improvements which have appeared since the token launched in 2017, such as implementing ERC-677.

This technical change will enable smart contracts to be notified via a callback function whenever tokens are transferred to them. This is useful in many situations, including moving tokens across interchain bridges, moving tokens to Data Unions, and so on.
ERC-677 is an extension of the ubiquitous ERC-20 standard, 100% backwards compatible, and used by a number of well-known projects, such as Chainlink. (A minimal sketch of the transferAndCall flow it enables follows at the end of this post.)

The practicalities of the migration

In the token migration, a new token smart contract will be deployed, and token holders can swap their existing tokens 1:1 for the new token at any time. There will be a simple UI for doing this in a self-service manner. The Streamr team will also notify exchanges and other custodial parties, and encourage them to migrate the tokens in their custody.

The new token will adopt the DATA symbol, while the old token will be renamed DATAv1. A similar approach was used in MakerDAO’s DAI migration, where the new token adopted the symbol DAI, and the old token was renamed from DAI to SAI.

You will be able to migrate your DATAv1 tokens at any time, meaning that there will be no particular urgency to do so, or any particular time when you need to have them in your wallet to qualify for the migration.

SIP-2: Drop Canvas and Dashboard features for now

Streamr Canvases are microservices that consume and act upon real-time data, defined in a visual drag-and-drop editor. Dashboards are collections of visualisation widgets extracted from Canvases. While they have proven to be useful tools in the ecosystem, maintaining and upgrading the tooling in future milestones requires considerable resources and steals focus from more fundamental efforts such as developing the Streamr Network itself.

Canvases have so far been a centralized service hosted by the Streamr core team to offer a cloud-like install-nothing experience. As the whole Streamr ecosystem moves towards decentralization as envisioned, Canvases can hardly continue their centrally hosted existence. In the original 2017 whitepaper, it was envisioned that Canvases could eventually run on decentralized computation frameworks developed by projects tackling that problem (such as Golem), but building such frameworks has proven to be more difficult than many imagined at the time, and none suitable for running Canvases are really available today.

The proposal is to remove the Canvas feature from the Streamr Core application and the associated API. The code will be archived into a fork for safekeeping and potential later use. An example of later use could be to relaunch the Canvas tooling at a later time as a self-hosted version which would connect to the decentralized Streamr Network for data.

(Image: The Streamr Canvases feature in the Core app)

The case against the proposal

Canvases have value as a tool to create simple automations and integrations based on data from Streamr streams, including simple centralized oracles that interact with Ethereum smart contracts. If the feature is removed, users will need to find other ways to accomplish what they need. Using alternative approaches may be harder than using Canvases, which are pretty user-friendly and approachable.

The case for the proposal

Dropping Canvases will improve the team’s ability to focus on the essence of the project, the Streamr Network and its token economics, and speed up its delivery by eliminating some baggage that would otherwise need to be migrated to newer Network milestones.

Canvases can be used to build automation, visualisations, and oracles — but they are unlikely to ever become the best tool for any of these tasks, as better, specialised tools and methods are available to most developers and are often already being used by them.
For example:

Node-RED is a popular tool for creating data-driven automation workflows, and it already supports Streamr.

Grafana is a common and flexible framework for visualisations, and it could easily ingest data from Streamr streams with a suitable plugin.

Chainlink and API3 are frameworks focusing on connecting data to smart contracts, with capabilities that go well beyond the simple oracles that can be built with Canvases. Chainlink already supports Streamr, and Streamr is a founding partner in API3 with an integration planned.

Where can I go for further information?

There will be a Discord AMA at 15:30 UTC on Tuesday February 9th, where I’ll happily answer any questions you have on the governance process or the two proposals on the table, ahead of the vote. In the meantime, if you have any questions or comments, feel free to drop them in the #governance channel of the new Streamr Discord server, where you can talk directly with members of the team.

Originally published at blog.streamr.network on February 4, 2021.

Kickstarting Streamr Governance was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
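To make the ERC-677 mechanics described under SIP-1 concrete, here is a hedged sketch of the transferAndCall flow that the standard adds on top of ERC-20, written with the ethers.js library. The token and recipient addresses are placeholders, and the sketch assumes the migrated DATA token exposes the standard ERC-677 interface; it illustrates the standard in general rather than Streamr's actual contracts.

const { ethers } = require('ethers')

// Placeholders: a real ERC-677 token address and a contract that implements onTokenTransfer
const TOKEN_ADDRESS = '0x0000000000000000000000000000000000000001'      // placeholder token
const RECIPIENT_CONTRACT = '0x0000000000000000000000000000000000000002' // e.g. a bridge or Data Union

// Minimal human-readable ABI fragment for the ERC-677 extension
const ERC677_ABI = [
    'function transferAndCall(address to, uint256 value, bytes data) returns (bool)',
]

async function main() {
    const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL)
    const wallet = new ethers.Wallet(process.env.PRIVATE_KEY, provider)
    const token = new ethers.Contract(TOKEN_ADDRESS, ERC677_ABI, wallet)

    // One transaction both moves the tokens and triggers the recipient's
    // onTokenTransfer callback, so no separate approve/transferFrom step is needed.
    const tx = await token.transferAndCall(
        RECIPIENT_CONTRACT,
        ethers.utils.parseEther('100'), // assumes an 18-decimal token
        '0x'                            // optional payload passed through to the callback
    )
    await tx.wait()
    console.log('transferAndCall confirmed:', tx.hash)
}

main().catch(console.error)

This single-transaction flow is why ERC-677 simplifies bridge and Data Union interactions: the receiving contract learns about the deposit in its callback instead of relying on a separate approve-then-transferFrom sequence.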

Streamr network

21. 02. 04

Adding Value to Indian Devs...

The Streamr Data Challenge brought together a talented set of developers who are using Streamr to build interesting solutions to real-world problems. With a vision to evangelise the value of decentralized data to Indian developers and entrepreneurs, the Data Challenge has created a community of kindred folk. On account of the progress made, here’s a round-up of this journey and interesting aspects of this Data Challenge.

What is the Streamr Data Challenge and how does it add value to the Indian developer community?

The Streamr Data Challenge is a clarion call for entrepreneurs and developers who are building applications that have active data economies, inviting them to innovate with privacy-preserving, ethical user data management. It shortlists individuals who have such applications, or are adding such a data layer into their existing applications, and provides them with support to build these solutions on the Streamr stack.

The winners of the challenge stand a chance to win prizes from a pool of 5,000 USD, and the top 20 teams that make the final shortlist are guaranteed to win prizes.

The Streamr Data Challenge not only educates entrepreneurs and innovators (both existing and aspiring) about building Data Unions — an ethical framework on the Streamr stack that allows people to sell their real-time data and earn revenue — it also incentivises them to give enterprise-grade ethical data management the focus it deserves. The Streamr Data Challenge also provides technical support in terms of exploring use cases, integrations, etc.

Learn more: https://www.streamrdatachallenge.com

Streamr Data Challenge Community Meetups

So far, we have conducted a Streamr meetup, a one-on-one meetup and an internal AMA for the shortlisted participants. As the first interactive touchpoint for the community, the Streamr meetup was focused on providing developers and entrepreneurs with an introduction to Streamr, with a live demo of the platform by Matthew Fontana, the project’s Head of Developer Relations.

In the context of the Streamr Data Challenge, we also created a more personalised setup with the participating teams; 10 projects and startups with credible use cases for integrating Streamr in the solutions they are developing. We had the opportunity to dive deep into the solutions that each startup was working on, in the one-to-one sessions, and provided them with the next steps to build solutions on Streamr accordingly.

In addition to this, in the later phase of the Data Challenge, we hosted an informal AMA for the participants. The participants posed their questions with respect to the validity of use cases, opportunities for scaling, Streamr’s technical support, and more.

Coming up!

With our plans in motion, the next order of business for us is unveiling the shortlist of participants who will be eligible to win prizes from a pool of 5,000 USD. We expect to release this by the first week of February.

Closing Thoughts

So far, we’ve understood that the Indian developer and entrepreneur communities are thriving. Streamr’s value in enabling decentralized data sharing economies fits perfectly into an overall vision to preserve data privacy and create meaningful relationships between tech businesses and their users.

Overall, we have received an overwhelming response from student circles, with over 150 submissions. The projects that we have received span various industry verticals, such as healthcare, media/entertainment and IoT-based products.
These entrants are eager to add a new and exciting layer of ethical data management to their existing products and services.We’re excited to progress the idea of decentralized data economies for the Indian developer and entrepreneur spaces. This might be the start of a value-driven data economy for Indian consumers and we’re happy to push this innovation forward.Originally published at blog.streamr.network on January 26, 2021.Adding Value to Indian Devs & Entrepreneurs with the Streamr Data Challenge was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 01. 26

‘Trust’ in a Centralized W...

Building trust among users is one of the central concerns for businesses in this data-centric world. With IoT devices, smartphones and wearables constantly streaming information about users, it is exceedingly apparent that consumer data is vulnerable to illicit or malicious activity. Being connected to individuals across the globe and having amenities at our fingertips may be the defining characteristics of the information age we live in, however, these luxuries have some unpleasant implications.The Problem with CentralizationWhen a user signs up for seemingly “free” services that make their lives easier, they also unknowingly give up their right to privacy. Most services that you can use for free usually run on ad revenue, and the best way to ensure clicks on ads is to understand the user’s preferences and target ads tailored to their liking. Some apps even collect data like performance metrics to help them improve their services, but the real problem begins when the product attempts to “understand” the user.Understanding the user involves collecting and storing multiple data points that describe the user, depending on what service is being used. For example, a navigation app may collect and store location data, whereas a shopping app may collect and store details of a recent purchase or search history. Companies that offer the service essentially own this data about their users and are free to use it to improve their own services or sell to other companies. Maintaining a central record of all users and their descriptors exposes a company’s data storage facility as a single point of failure, putting every user’s information at risk of a large scale data breach.In September 2017, Equifax, a multinational consumer credit reporting agency, reported a data breach that exposed the personal information of 147 million people. This showcased a very significant vulnerability in centralized legacy systems. Reports showed that there were several single points of failure within Equifax. Blockchain-based solutions could have provided better security and consumer data privacy.Decentralizing Consumer DataDecentralization is one of the core offerings of blockchain technology, and this makes the blockchain a formidable solution to centralized security threats. The hackers reportedly breached Equifax through a customer support portal, then moved on to other web portals and servers and stole consumer data — usernames and passwords — with little intervention from Equifax’s security systems. A blockchain-based system would have made this kind of hack near-impossible, as it would have cut off access to servers, encrypted consumer data and prevented the security breach.Breaches such as what happened with Equifax, highlight the importance of separating the different kinds of data that is collected about the consumer. WhatsApp’s recently updated privacy policy has brought these issues to the forefront, with more people realising how valuable their personal data is, and what it actually means to consent to using a service that has almost unlimited access to their personal data. Facebook alone collects thousands of data points, like the groups you are part of, posts you interact with, pages you follow etc. and the story is the same with Instagram too. 
As individual silos holding data, they have a limited amount of information about the user as an individual, but when the silos start speaking to each other (when they are integrated), they are able to provide a comprehensive profile on the user, their likes, dislikes, travel plans, preferences, location activity, daily schedule etc.What’s worse is that with the integration of WhatsApp (especially its payment feature), the damage caused by a data breach of an entity with that amount of personal information would be unimaginable.Emerging Solutions to Protect Consumer DataThe decentralization provided by the blockchain, along with the security and layers of encryption that it offers, allow the factor of ‘trust’ to be removed from the business owner. The users no longer have to rely on the precautions and safety measures taken by the company and can put their faith in the algorithms that govern transactions on the blockchain. This not only puts the ownership of personal data back in hands of the consumer, but also gives the user a choice as to what data they are comfortable with sharing.Additionally, with tokenisation as an option, consumers can be incentivised to share their data with service providers. The DATA token, or Streamr DATAcoin, is an ERC-20 token used across the Streamr platform. For example, Streamr allows apps to stream data that they collect to Data Unions, where they can be purchased by organisations or individuals interested in the data. In return for the data they provide, users can be rewarded Streamr DATAcoins, in addition to having control over what data is shared.Creating a trustworthy space where consumers can share intangible properties like data can seem quite challenging. However, with decentralized data economies, we can build trust and accountability into the overall data landscape.Originally published at blog.streamr.network on January 14, 2021.‘‎Trust’ in a Centralized World & Emerging Solutions for the Protection of Human Data was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 01. 14

2021 Data Monetization Pred...

First of all, what is a Data Union? A Data Union is a framework that allows people to easily bundle and sell their real-time data and earn revenue. On its own, our data does not hold much value, but when combined in a Data Union, it aggregates into an attractive product for buyers to extract insights. This is crowdselling, and has the potential to generate unique data sets by incentivising trade directly from the data producers.The problem Data Unions solveMost of the data generated by smart devices is stored in silos by the companies who provide them. By making this information more accessible through trade, and incentivising the creation of new data points, an individual data economy can be stimulated and technologies such as those imagined in IoT can be further realized.Basically, data unions provide the possibility for users to receive the value of the transfer of their information, thereby distributing value directly to the source. This differs hugely from what we see today, which is data value extraction, oftentimes without consent of the consumer.2021 Data monetization predictions from the expertsStreamr aims to change all of that, which is why we’ve gathered some of the world’s experts to get their predictions on the future of data monetization.Our experts are all part of the Streamr Data Union Advisory Board, which is composed of policy makers, academics, subject matter experts and political activists.How do you think Data Unions will take shape in the next year?Frankly, the ‘Data Union’ concept hasn’t taken off yet. But, that’s quickly changing and according to our experts, we’ll see more journalists and data professionals picking up the phrase in the next year, which will help to popularize the idea. For now, based on Google search terms, Data Unions, labor unions, and trade unions show a similar stable trajectory in terms of how many people are searching for information on these concepts. But according to our experts, the mindshare of people knowing what Data Unions are or at least having an awareness of the concept will significantly change in 2021.“The next year needs to be about raising awareness of the need for these new institutions, not just amongst the data aware but in the wider population.” — Alex Craven, founder of The Data City“This is the year folks are starting to hear about Data Unions. The first attempts at building useful ones are starting to take place.” — Brian Zisk, parallel entrepreneur and Chia Network advisor“Awareness will increase about the value of collective bargaining around data, and we will see different groups organizing to make demands for change in how data is valued and who gets compensated.” — Peter Gerard, filmmaker and a leading expert in film distributionMaria Savona, Professor of Economics at the University of Sussex, believes that digital rights activists must be aligned for data unions to take off, with the following observations:“I think Data Unions will take off conditional to framing a campaign that touches upon economic justice, agency over data, instead of data ownership. Digital rights activists tend to be against data monetisation so an argument must be won on the digital rights side.”Matt Prewitt, President of the RadicalxChange foundation, on the other hand thinks there are too many information silos, beyond the intellectual elite and into the mind awareness of most people:“In the next year, the issue will probably enter the mainstream of elite conversation but we must still do a lot of popular consciousness-raising. 
Most people are just starting to become more aware of the pervasiveness of data and their powerlessness over it.”How will Data Unions take hold in the next 3 years?Most of our experts believe that the world will fundamentally be changed three years from now and Data Unions will be part of a larger move toward more equitable value distribution throughout the economy, especially the digital economy.“In three years, there will be several valuable models of Data Union feeds successfully in use.” — Brian Zisk, parallel entrepreneur and Chia Network advisor“Within three years I anticipate a world in which Data Unions are beginning to return money to people for creating and sharing data.” — Peter Gerard, filmmaker and a leading expert in film distribution“Data Unions should be seeing large scale adoption and finding increasing influence over the personal data debate and technology landscape.” — Alex Craven, founder of The Data City“In the long term, I think Data Unions need to carve out their own position in the landscape of data trusts, data stewards, and other legal figures, by branding itself appropriately, so that people do not get confused.” — Maria Savona, Professor of Economics at the University of Sussex“At some point in the next two years, technology will allow us to deliver tangible value for the individual and at that point the floodgates will open. Late adopters will join and in three years participating in a Data Union will be normal and possibly mandated in some jurisdictions. Unions will be for instance certified, much like other data infrastructures, and people will pick theirs much like they pick network or cloud storage providers. The economic return for data owners will be the driver. The best unions will implement services to create added-value data assets (data curation and harmonization for instance) to multiply the value of their offer and in turn returning more “data dividends” to their subscribers. Also key will be their ability to negotiate with buyers on behalf of their members.”“Hard to predict the form it takes, but Data Unions and the legal landscape around them will become central to political and economic life.” — Matt Prewitt, President of the RadicalxChange foundationWhat’s your prediction on how global data policy will inform enterprise on data portability?“More laws will be passed which will mostly muck things up, but it will be easier to take your data with you.” — Brian Zisk, parallel entrepreneur and Chia Network advisor“There’s an inherent conflict between data portability and existing approaches to data privacy. Policies that encourage data portability are going to have to grapple with the privacy laws that make data portability more challenging and that have entrenched the power of large internet companies. Data portability is essential to competition and innovation, but governments have traditionally reacted to privacy threats with hastily or poorly written laws that can misunderstand the nature of technology and usually end up limiting scope for portability. Due to the uneven and conflicting progress between privacy and portability and the slow process for policy implementation, I expect it will take years before data portability becomes commonplace.” — Peter Gerard, filmmaker and a leading expert in film distribution“We need to find a better way to manage the relationship between individuals, their data and organisations that use this data. 
The hope that this passes from being a battle to pull back control to the individual into a more progressive and constructive partnership with industry and government to create equity in the relationship that benefits all.” — Alex Craven, founder of The Data City“Data portability is already in the GDPR and hopefully will be copied across non-EU countries. Data ownership is regulated only at the level of a firms’ database rather than personal data. This can be a useful rhetorical device to convince data policy makers that data subjects can monetise their own data, i.e. only showing regulators the potential inconsistencies of their policy objectives.” — Maria Savona, Professor of Economics at the University of Sussex“Portability and ownership are established principles, at least in Europe, still more in theory than practice, but the course seems to be set. The big elephant in the room are licensing and monetization. Interests and stakeholders are far from aligned around these issues and policies are much needed. Hopefully the US will follow Europe’s lead in tackling head on these big decisions.” — Davide Zaccagnini, informatics researcher at MIT.“As Yogi Berra once said, predictions are hard, especially about the future.” — Matt Prewitt, President of the RadicalxChange foundationWhat’s your prediction on how global data policy will inform enterprise on data ownership?“YOYOD — You own your own data. Doesn’t mean others don’t as well, but it’ll become more obvious and horrifying what each of us are sharing,” — Brian Zisk, parallel entrepreneur and Chia Network advisor“Many massively successful internet companies have increased their value by convincing people to voluntarily create and give away data to the companies. Examples include Amazon (through reviews, ratings, lists, affiliate programs, IMDB metadata, etc.), Google (through analytics, maps and places data, search feedback, GPS-tracking, training of image recognition through recaptcha, etc.), Facebook (social graph, interests, facial recognition training, etc.), Yelp (places and reviews), Foursquare (places and GPS-tracking). Governments need to recognize that companies reselling or otherwise monetizing user-generated data — that people gave them for free — are both inhibiting competition and undervaluing the individual contributions people have made (these companies often think it is the company’s achievement that they convinced so many people to crowd-source their data rather than appropriately recognizing that the achievement belongs to the people). I think that this is the biggest opportunity for change and where policy can disrupt the value chain most meaningfully.” — Peter Gerard, filmmaker and a leading expert in film distributionHow will global data policy inform enterprise on data licensing and monetization?“There will be a dual track where people are knowledgeable about and capture their own data and do with it as they please, contributing it to a Data Union for example or just being more conscious of what data trail they create, and the more common model where folks largely won’t care.” — Brian Zisk, parallel entrepreneur and Chia Network advisor“While data licensing can be a lucrative opportunity in some sectors, it seems to me that the most wealthy internet companies monetize their data more effectively within their own platforms. 
Thus I think the most meaningful changes to monetization from a Data Union perspective, will need to come from external transparency regarding a company’s data usage and compensating users based on that transparency. There is an opportunity to require open standards for how the data is tracked and accounted for, and this open standard could be a decentralized system like Streamr’s.” — Peter Gerard, filmmaker and a leading expert in film distributionWhat’s the worst case scenario of how data monetization takes shape in the future?“Laws could be passed which give data rights solely to the services.” — Brian Zisk, parallel entrepreneur and Chia Network advisor“Worst case is that policy developments continue to entrench the biggest players at the expense of competition and equity.” — Peter Gerard, filmmaker and a leading expert in film distribution“More of the status quo.” — Alex Craven, founder of The Data City“The ones vividly described by Valentina Pavel here. Overall, I think there is a lot to do to define arguments, campaigning and aiming at a global case, rather than a EU or a US centred one. For instance, what about trying to look at what happens in Taiwan?” — Maria Savona, Professor of Economics at the University of Sussex“Data Unions fail to deliver value to their subscribers. Both technological and market pressure can push things in that direction. Data Unions become another “Napster story” and power is once again centralized in some new kind of structure. That’s why a Data Unions strategy has to strive, first and foremost to deliver actual, measurable value to the people.” — Davide Zaccagnini, informatics researcher at MIT.“Data Unions acquire a reputation as an unworkable or failed proposition, and we continue to operate in an economy in which the value of information flows to the most powerful aggregators.” — Matt Prewitt, President of the RadicalxChange foundationWhat’s your blue sky vision for what happens with data policy? And how do you imagine the impact this scenario has on culture (people, government, and companies) in the next 3–5 years?“It should be clear that everyone has rights to the data they generate.” — Brian Zisk, parallel entrepreneur and Chia Network advisor“The best case vision for me would be legislation for a change in terms & conditions to introduce a citizen centric contract, where a service provider contract is a two way document where in addition to your acceptance of their terms, the provider accepts your personal data terms and conditions. The role of Data Unions would be to act as the institution facilitating and managing that contract. Perhaps the law would make Data Union membership mandatory in this circumstance.” — Alex Craven, founder of The Data City“As user-centric policies get incorporated in more and more systems, I can enforce in one click my right to carry, delete or share my data. I know who is using my information, when and for what purpose through a simple dashboard on my phone. I can seek the best Data Unions much like I seek companies whose mission and practices align with my values, while looking for the best bargain. 
I can connect with people who share my values and move strategically our aggregated data to influence corporations and other institutions.” — Davide Zaccagnini, informatics researcher at MIT.“Data Unions become a major locus of economic and political power, interacting symbiotically with a healthy digital economy in which diverse interests are more-or-less well represented.” — Matt Prewitt, President of the RadicalxChange foundationOriginally published at blog.streamr.network on January 7, 2021.2021 Data Monetization Predictions was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

21. 01. 07

How the data landscape in I...

Image credits: Alok Sharma, UnsplashPrivacy as a Fundamental Right — 2017The 21st century is often referred to as the information age, where data is a powerful asset, with humans generating about 2.5 quintillion bytes of it per day. In the early 2010s, India also saw an explosive rise in the number of ways information was being used when global players like Uber, Facebook, and Airbnb entered the market.The collection, processing and storage of data generated by nearly 450 million Indian users opened up a whole new world of possible data-driven innovations. Uber was able to find a route with the least traffic, Facebook could reunite long lost friends and Amazon could recommend a product that even you didn’t know you needed! This all sounded magical until users took a closer look at these recommender systems, and realised that they were being shown ads for products that they had only ever expressed interest in verbally. This led to the landmark judgement by the Supreme Court of India that ruled the Right to Privacy as a Fundamental Right in 2017.Protecting Consumer Data — 2018Following concerns regarding the privacy of consumer data, the IT Act (2000) was also amended to include the right to compensation for improper disclosure of personal information and, in addition, digital companies were required to let users know what personal information they were collecting.Although an improvement from no regulation, this amendment still does not compare to the comprehensive rules laid out by the European Union’s General Data Protection and Regulation (GDPR) or the USA’s Personally Identifiable Information (PII) Laws. While both regulations have been criticised for either being too stringent or too lenient, a whitepaper released as part of the Digital India Initiative in 2018 suggests that the Government of India may adopt a data regulation policy that would be a combination of the two prevailing global standards.Data Protection Bill — 2019In December 2019, a fully fledged Personal Data Protection Bill (DPB) was introduced to the Indian Parliament by the Ministry of Electronics and Information Technology. This bill — currently still being analysed — would regulate the collection, processing, storage, usage, transfer, protection, and disclosure of personal data of Indian citizens. It comes as an important development for global firms who may need to re-evaluate their business models, especially ones that offer free services in exchange for personalised ads.The DPB enumerates a number of features that would not only require companies to alter their business models and practices, but also some features that would add to the cost and complexity of their service. Let’s take a look at some of the features that businesses would need to keep in mind in preparation for India’s new regulation on Personal Data Protection.Explicit User ConsentThe DPB would require companies to gain explicit consent from the user, both while collecting the data and for any subsequent processing. This puts the burden of trust on the company, making them more Data Fiduciaries than Data Collectors.Personal Data as PropertyThe DPB defines that the data generated by the user is owned entirely by the user, as equivalent to personal property. While this idea sounds simple, it could be a nightmare to implement for digital companies because with physical property the owner can ask for it to be returned to them. 
This means that these companies would have to consider the infrastructure to remove all stored information about the user, should the user wish to terminate their membership, which could prove to be very tricky considering that the user’s data may have already been sold to a third party.

The DPB classifies data in three ways, with specific regulations and allowances for each:

Sensitive Data: Any information on financials, health, sexual orientation, genetics, gender status, caste, or religious belief — must be stored within Indian borders but may be processed outside.

Critical Data: Information deemed by the government as important with respect to national or public security — must be stored and processed within Indian borders.

General Data: Classified as any piece of information not falling within the above categories — no restriction on storage or processing.

Lastly, perhaps one of the most controversial features of the Data Protection Bill is the first regulation of its kind in global social media:

The Verification Tag

This feature requires all digital companies to verify their users and sort them into one of the following categories:

Users with verified registration and display names

Users with verified registration but anonymous names

Anonymous and unverified registrations.

This essentially means that these companies are now also responsible for collecting and verifying the real identities of their users. To put this into perspective, Facebook has faced the same dilemma with over 100 million fake accounts as of today, and a verification tag will curb the presence of such accounts on various social media platforms, thereby holding users accountable for their behaviour online.

2020 — Splinternet?

The Personal Data Protection Bill is currently being reviewed by a Joint Parliamentary Committee in consultation with experts and other stakeholders, but in its current state, experts predict one of two outcomes:

1. Companies align with the new regulations, alter their business models and include additional infrastructure. As a result, the user gets to enjoy these global services and their benefits while still having Data Privacy regulations similar to the likes of the EU and Canada.

OR

2. Companies don’t align with the new regulations and are either forced out of the market or decide to pull out of India to be replaced by Indian counterparts, drawing a very intriguing likeness to when Chinese regulation forbade players like Google and Facebook from operating within China’s borders. This locational divide and eventual fracturing of digital supply chains could hinder a global economy, causing the “Splinternet”.

The Future of Data Privacy in India

In either scenario, the Personal Data Protection landscape in India will undergo some drastic changes in the coming years, placing a greater emphasis on the protection of its “netizens’” data. Global digital companies operating in India would have to rethink their business models and invest in the infrastructure required to comply with the new regulations, but is this enough?

Here’s how a professional working with and researching Personal Identity Management puts it:

With the technology that is available today, through blockchain, cryptography or edge computing, it is very possible to reimagine the data storage and processing systems that most companies use today. And it would be ideal for the user’s privacy if these regulations come about.
But what will crucially affect the implementation of such policies is how the companies are penalised if they flout these norms.

- Vikram Bhushan — Co-founder, Hypermine

It is evident that alternatives to the current data economy exist, which hold user information at a high level of confidentiality. Data Unions are an excellent example of this. With Data Unions, not only is the user data anonymised, with only essential information made available for processing or analysis, the user is also rewarded with Streamr DATAcoins for consenting to share their data. Data Unions put the user in complete control of their data while simultaneously allowing companies to analyse information that is essential to improving their services.

With various paths to explore solid data privacy and with the presence of solution providers in emerging technologies, the Indian government can create data privacy guidelines that not only safeguard consumer data, but also incentivise data-driven innovation. The future holds as many opportunities as there are challenges, and India is on a path that has the potential to enable a thriving data economy.

Originally published at blog.streamr.network on December 11, 2020.

How the data landscape in India is on the verge of a makeover was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 12. 11

Dev Update, November 2020

Welcome to the November Developer update — our last update of 2020! It’s been another fruitful month in terms of roadmap progress and Data Union growth. We’ve seen a big uptick in developer interest towards building Data Unions, in part from our recent update to the Data Fund, and also following news that the European Union mentioned Data Unions in their latest proposal.

While our outward focus is on growing the Data Union ecosystem, we are of course also working hard to reach the next Network milestone — Brubeck. On that front I’d like to welcome Bernat Canal to the Streamr Network team. Bernat is an Ethereum and Node developer with previous experience in distributed network design; a nice addition to the team! Bernat is already hard at work helping us bring the Network into the hands of the community as soon as possible. As this milestone gets closer, these updates will begin to provide guidance on how to run your own Streamr node.

Here are the main development highlights:

Big improvement in Network throughput. Stress tests show 240 MB/s being pushed successfully through the WebRTC connection without issue.

Network Explorer deployed to our staging environment, quite close to release.

Shared the first prototype of StreamrFS internally. Working well.

Data Union 2.0 support in JavaScript & Java clients being finalised and tested.

Data Union 2.0 support being added to the Marketplace and Core apps.

Governance tooling established in preparation for our first governance pilot.

Data Unions 2.0, Core, Client and Marketplace developments

A public Data Union 2.0 test environment is very close to release, hopefully rolling out in December. Data Unions 2.0 will be run on an EVM-compatible chain. More details will be released closer to the release date.

The Core app was updated this month, the main change being that Stream IDs are now human-readable. The new format for stream IDs is an Ethereum address plus a user-given path. For example, you can now identify a stream with 0x123/hello/world or even mydomain.eth/traffic/helsinki (a short example appears at the end of this update). If you haven’t used an ENS name before, you can easily create one with a small amount of ETH on ENS Domains. Your purchased addresses will automatically appear in the stream editor for immediate usage. At a future date the Stream registry will live on the blockchain. Just remember, you may need to escape the slash depending on your coding environment. Also, please check out the breaking changes section below if you are using an email address to log in to your Streamr account.

Token Economics

Phase 3 has started with BlockScience, while still running tests to validate our simulation implementation. The test plan focuses on:

1) The bootstrapping phase — Streamr incentivises people to run nodes to kick off the Network.

2) System resilience — how the Network tolerates knockout shocks and finds a new equilibrium.

In short, this means gaining an understanding at an atomic level of how to best grow and sustain a vibrant network in a trustless manner using incentives.

On a separate internal track, there’s been some initial thinking about what layers or higher-level structures are needed between the user and the low-level atomic agreements in the Network. This work will probably result in a diagram or working document to be further discussed internally. Matching the various tokenomic incentive mechanisms with desired behaviour that grows the Network is the goal here.

Our first public tokenomics workshop was also recorded early this month — you can watch it here.
We will be recording more tokenomics workshops as new simulations become available. The goal here is to educate, as well as to harness the insights of the community to steer some of our roadmap goals, with some larger impact decisions happening early next year.

Network

The Network team had a deep-dive discussion into how the node architecture is transitioning from Corea to Brubeck. Currently we have an architecture where nodes are ‘thin’ and clients are ‘heavy’. The Java and JavaScript clients have many technical responsibilities and requirements, and this leads to some duplication of code and an uphill battle to expand to new languages. We’re addressing this by shifting the heavy lifting to the Network node.

The road to Brubeck

The path of an end-to-end encrypted and signed message may start at a simple device, communicating in a trusted environment with a trusted network node, before entering the decentralized transport network for delivery to its recipient. This will make supporting all sorts of languages and protocols a far more scalable endeavor. This work hasn’t begun yet, but will commence once Data Unions 2.0 is production-ready.

Hackathons

While we have the Streamr Data Challenge happening in India, we are also participating in the Ocean Data Economy challenge, running until January. We’ve sponsored a prize for:

Best integration into Streamr real-time data streams. Utilising the Streamr JS Client to extract a static data set from a stream and forwarding it to the Ocean Marketplace to make it easy to buy & sell access as an Ocean datatoken. It makes sense for data to be sold on as many marketplaces as possible, so to that end, we’re pleased to encourage development of a real-time to static adapter for the data generated in our streams.

Ending support for email accounts

As part of our journey towards full decentralization, we are ending support for password-based login after December 31st, 2020. After that date, the only way to authenticate to Streamr is by using an Ethereum wallet. Not only is this method much more secure, it is also extremely convenient, and onboards you to the world of blockchain-based digital assets and digital identity.

To avoid losing access to your Streamr account, you should take the following steps by December 31st:

Install MetaMask (available for Chrome, Firefox, Brave, Edge, Android, and iOS)

Go to your profile page on Streamr

Click the Connect wallet button and pair your wallet with your Streamr account

Congratulations, you are now future-proofed — not only for Streamr, but also for myriad other decentralized applications, such as the “money legos” of DeFi (decentralized finance).

Deprecations and Breaking Changes

A number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked ‘Date TBD’ will be happening in the medium term, but a date has not yet been set.

December 31st: Email/password login will be removed

If you’re still using email/password-based login, avoid getting locked out of your account after December 31st, 2020 by doing the following: Install the MetaMask wallet, go to your Profile page, and connect your wallet to your account. From there on, you can use the “Authenticate with Ethereum” option on the login screen instead of the email/password combination.
In addition to MetaMask and compatible Web3 wallets, various wallet connectivity solutions such as WalletConnect will be supported in the future.

December 31st: Support for API keys will end

The ability to create API keys has already been removed from the Core App, while previously existing API keys will continue to work until December 31st, 2020. After this date, scripts and applications still using API keys may break. To avoid disruption, simply create or connect an Ethereum account on your Profile page, and pass the private key for that account to the Streamr client library in your application:

JS library:
const client = new StreamrClient({ auth: { privateKey: 'your-private-key' } })

Java library:
StreamrClient client = new StreamrClient(new EthereumAuthenticationMethod(yourPrivateKey));

December 31st: Storage becomes opt-in and freely choosable

So far, the storage node operated by the Streamr team has stored all messages in all streams for a period of time configurable per stream, with the default being one year. Going forward, there may be many storage nodes operated by different parties and located in different geographies, and more control over storage will be needed.

As a stream owner, you now have control over which, if any, storage nodes you’d like your historical messages to be stored on. The controls for choosing storage nodes for your stream are already present in the Core application, although at the moment there is only one option shown: Streamr Germany, which is the storage node that has been storing data so far.

By default, no storage nodes are selected, and stream owners can opt in to storage by selecting the storage nodes they wish. Starting January 1st 2021, storage nodes may only store the streams assigned to them, and purge data for any other streams. To avoid losing any important historical data in your streams, please use the Core application to assign your streams to the Streamr Germany storage node by Dec 31st to maintain the status quo and continue storing your streams.

(Date TBD): Support for unsigned data will be dropped

Unsigned data on the network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).

Originally published at blog.streamr.network on December 10, 2020.

Dev Update, November 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
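As a brief illustration of the human-readable stream IDs and Ethereum key-based authentication described in this update, here is a minimal sketch using the Streamr JS client. The stream path is the example ID from the update and is assumed to already exist (for instance, created in the Core app); the private key is a placeholder, and method names follow the JS client as documented around this period and may differ in other client versions.

const StreamrClient = require('streamr-client')

// Ethereum key-based authentication; the private key is a placeholder
const client = new StreamrClient({
    auth: { privateKey: process.env.PRIVATE_KEY },
})

// Human-readable stream ID: an Ethereum address or ENS name plus a user-given path
const STREAM_ID = 'mydomain.eth/traffic/helsinki'

async function main() {
    // Log every data point arriving on the stream
    await client.subscribe({ stream: STREAM_ID }, (message) => {
        console.log('received', message)
    })

    // Publish a data point to the same stream
    await client.publish(STREAM_ID, { vehicles: 42, timestamp: Date.now() })
}

main().catch(console.error)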

Streamr network

20. 12. 10

Join the Streamr Team at My...

Our friends at MyData Global are organising the annual gathering for anyone in the data space. From the 10th to the 12th December, hundreds of entrepreneurs, builders, researchers and activists will be joining MyData's three-day online conference to talk about how to bring positive change to the data economy. Some hot topics on the agenda are: data sovereignty, interoperability, and data governance, so of course the Streamr team had to get involved!As this year's gold sponsor we're happy to host three different sessions during the conference.Fireside chat with MIT's Sandy PentlandOn the evening of Friday 11th at 10:45pm UTC+0, Streamr's Shiv Malik will host a 60-minute fireside chat with MIT's Alex "Sandy" Pentland to talk about the business opportunities enabled by Data Unions and what technical implementations can look like.Interoperability and Data Policy PanelOn the same evening, at 9:00pm UTC+0, I will be joined by Streamr's two advisors, Matt Prewitt and James Felton Keith, as well as the US Department of Homeland Security's Anil John, to talk about data interoperability and what data policy in the US will look like under Biden's cabinet.Workshop: Build your own Data UnionOn Saturday 12th, Streamr's Head of Developer Relations, Matthew Fontana, Shiv and I will be hosting a workshop on How to build your own Data Union. This one is especially interesting for MyData's Data Operators because they can leverage the infrastructure Streamr offers through its Data Union framework. Come and join us to learn more about it!Come and say hi at the Streamr boothWith all the challenges 2020 has thrown our way, many of us have a new-found appreciation for face-to-face interaction. Our virtual booth might not be as fun as the physical ones we had in the past, but nonetheless we'll be very excited to meet you there to chat about data ownership, Data Unions and data governance. Come and say hi!There will be more than 150 presenters from all around the world at MyData Global, all gathering to share their expertise. Take a look at the full program here.How you can attendIf you don't have a ticket already, you can register here.Originally published at blog.streamr.network on December 1, 2020.Join the Streamr Team at MyData 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 12. 01

News: Streamr Joins API3 DA...

Streamr and API3 join forces to bring Streamr data streams to smart contracts and decentralized applications, via first-party oracles. Traditionally, data providers have had to rely on third-party oracle solutions to connect their data to blockchain-based applications. This meant that providers were required to either trust third-party middlemen with control over their data, or operate maintenance-heavy blockchain middleware in place of the third-party operator. API3 solves this issue of API connectivity by enabling data owners to provide their data to decentralized applications via a simple and serverless first-party Web3 API gateway; Airnode. Through Airnode, Streamr Marketplace data can be directly fed into on-chain smart contracts, cutting out any and all third-parties.This yields several benefits: by providing a “set-and-forget” style Web3 API gateway for the provider to run, in addition to a significantly lighter yet more trust-minimised operation, the API3 model has concrete financial benefits. This is because the financial compensation for data sellers involves no middlemen, and thus value can flow directly from consumers to providers. Additionally, data providers remain in full control of who gets access to their data, and are not forced to cede this control to operators external to their organisation.API3 is able to offer high-quality real-time data streams to the decentralized web. Heikki Vänttinen, co-founder of API3 explains: “This is a long-term partnership that we intend to leverage extensively. Our goal is to enable data providers to sell their data to Web3 use cases without having to rely on middlemen or operate maintenance-heavy middleware. Streamr will enable us to do this; they already have the right infrastructure in place, as well as a substantial amount of both data providers and consumers onboarded. With Streamr, we have a great partner to get more high-quality data on-chain to be utilised by smart contract developers.”Selling Data Streams to on-Chain dAppsStreamr provides the financial incentives for any data provider to sell their data to consumer applications seeking to use it. This can, for instance, be data from an IoT Data Union, that is sold as a data product on the Streamr Marketplace to a paying customer. Traditionally, this data transport between the Streamr Marketplace and the data consumer is done through an API. But dApps cannot directly connect to APIs.API3, with the help of Airnode, facilitates this connection, meaning that data can flow freely and directly from the Streamr API to the dApp. Alternatively, in case there is a need for an aggregated data feed, i.e. a dAPI, the data source can be connected to a smart contract that aggregates data from multiple sources into a single value. This creates a more decentralized, trust-minimised data feed, while still maintaining source-level transparency through the use of first-party oracles.Streamr’s co-founder Henri Pihkala added: “The Streamr protocol transports secure, cryptographically signed, verifiable data points. They would be ideal to fuel data-driven smart contracts, however an oracle is needed in between. By joining the API3 ecosystem we can create the connection and unlock new opportunities for the decentralized data economy. Streamr acts as the scalable, first destination for your raw data flows, while samples or aggregates of the data can then be connected further to smart contracts.”As a founding-level partner, Streamr will also receive governance rights in the API3 DAO. 
API3 is controlled by its token holder community, who vote on proposals to take part in the governance of the project. In total, 1% of the API3 tokens are being allocated to Streamr as a founding partner in API3.Going forward, Streamr and API3 are planning to build a pub/sub implementation of the Airnode protocol. So far, request/response is the pattern most commonly used for oracles. By providing a pub/sub Airnode solution, the offering integrates naturally with the pub/sub pattern used throughout the Streamr Network.Originally published at blog.streamr.network on November 26, 2020.News: Streamr Joins API3 DAO as a Founding Governance Partner was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
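To illustrate why pub/sub suits this integration, here is a rough sketch (ours, not from the announcement) of a relay that subscribes to a Streamr stream and pushes each data point to an on-chain consumer contract via ethers.js. The contract address, ABI and update function are hypothetical, and in the real integration Airnode would handle the on-chain delivery rather than a hand-rolled relay.
const StreamrClient = require('streamr-client')
const { ethers } = require('ethers')

// Hypothetical consumer contract; with API3, the Airnode gateway would stand in for this relay.
const oracleAbi = ['function update(int256 value) external']
const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL)
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY, provider)
const oracle = new ethers.Contract('0xYourOracleAddress', oracleAbi, wallet)

const client = new StreamrClient({ auth: { privateKey: process.env.PRIVATE_KEY } })

// Pub/sub: every message arriving on the stream triggers an on-chain update.
client.subscribe({ stream: 'example-price-stream' }, async (message) => {
    await oracle.update(Math.round(message.price * 100)) // scale to an integer for the contract
})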

Streamr network

20. 11. 26

“What if somebody wants to ...

Testing the tokenomics of the Streamr Network with a digital twinThe Streamr team is currently designing the tokenomics for the Streamr Network, with support from the team at BlockScience. Streamr's CEO and co-founder Henri got together with Marek Laskowski, Jeff Emmet, and Michael Zargham from BlockScience to chat about testing the Streamr DATAcoin tokenomics via a digital twin. Here are the key takeaways from the discussion, and a look at where the DATA economy is headed.What is the Streamr Network?First off, let's quickly recap what the Streamr Network is. In 2017, the Streamr founders set out to build a decentralized network for real-time data. The internet's TCP/IP protocol doesn't include real-time data messaging. And that's why users have been relying on centralized message brokers, where servers relay data from data publishers to data subscribers. But, with the growth of IoT and smart services, these approaches are no longer good enough to create interconnected data economies. The solution is Streamr's P2P approach, following a decentralized take on the pub/sub pattern.We'll be looking at two different layers, the Streamr Network for real-time data transport and the Ethereum blockchain for payments.The role of DATAcoinThe Streamr Network token economy is powered by Streamr DATAcoin, an Ethereum ERC-20 token. The token already fulfills several functionalities such as payments. Going forward, reward, staking, and governance mechanisms will be added.Payments: This functionality is already implemented in the current version of the Network. Data subscribers and publishers can currently make and receive payments for their data streams on the Streamr Marketplace.Rewards: One of the challenges in designing the Streamr Network token economy lies in finding out how we can create the right incentives for broker nodes to contribute bandwidth to the Network. In the Bitcoin network, miners contribute proof of work by solving meaningless math problems; in the Streamr Network, however, mining is the contribution of useful bandwidth to the system.Staking: Streamr DATAcoin 'hodlers' can stake DATA to earn more tokens. Staking takes place on nodes, which also automatically increases a node's reputation. This way, DATAcoin holders don't necessarily need to run their own nodes, but can still profit from being members of the Network's token economy through earning a yield on their staked tokens.Governance: Proposals for the Streamr project can be voted on by Streamr DATAcoin holders to eventually enable the full decentralization of the Network. We will be running a pilot in the coming months to test out mechanisms for what a governance handover to the community could look like.Why not just ETH?This is a typical question that was raised many times during the crazy ICO days of 2017. Why not just Ether? Indeed, ETH and DAI can be used for payments on the Marketplace through our Uniswap integration. But beyond payments, ETH isn't an adequate medium of exchange for rewards, staking, and governance. To make sure we can align incentives between node operators and the larger Streamr community, a network-specific token is necessary. Think of it like this: if you're in the US you pay in Dollars, and in Europe, you pay in Euros. On the Streamr Network, it's Streamr DATAcoin. Not every ETH holder is a Streamr community member, let alone operates a node in the Streamr Network.
That's because Ether doesn't represent a stake in the ecosystem that we're building.Testing assumptions about the Streamr DATA economyTo test out different economic models for the Streamr DATA economy, the team at BlockScience has created a digital twin to run simulations. When designing token economies, a lot of subjective choices in modeling are being made. The best way to validate these assumptions is to just test them all out via a test network.Ultimately, the goal is to find out how to best incentivise the Streamr broker nodes whilst securing the resilience of the Network and stimulating growth of participants. In addition, these kinds of simulations can help us to see how certain shocks or attacks would impact the Network.Going forward, the aim is to fully decentralize the Network. Just like the miners of the Bitcoin blockchain, the Streamr Network's broker nodes won't be under centralized control. In contrast to the Bitcoin blockchain, however, the Streamr Network is already pre-financialised. The challenge is to determine the optimal incentives to grow and secure the Network at scale. Therefore, the BlockScience team is looking at different node operator personas to determine which constellation of personas results in a healthy dynamic for the Streamr economy.Designing the Streamr Network Token EconomyThere are many ways that a token economy can be designed. One popular choice, for example, is bonding curves to reward early adopters. Here an algorithm ultimately determines how a system can evolve.When designing these types of systems, it is advisable not to have too many degrees of freedom, as this results in large, complex systems that make it hard to intervene should problems arise.That's why it makes sense to start from a set of first principles. These first principles are the physics of the system, under which everything else within the system is governed.When testing these assumptions, seemingly irrational behavior from Network participants shouldn't be excluded. As Michael Zargham put it during the video chat, "it takes a lot of energy to climb Mount Everest, but that doesn't mean people don't do it."Looking at the possible choices Network participants can potentially make, there's an idealised topology and a realised topology. In their research, the BlockScience team observes the gap between these two states to eventually find the right rules to induce behaviour that keeps the Streamr Network healthy.The road aheadToken incentives will be layered into the Network as we progress towards our fully decentralised vision. In 2021 we will begin to phase in token rewards for early adopters with a community-run incentivized testnet.Originally published at blog.streamr.network on November 24, 2020."What if somebody wants to climb Mount Everest?" was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
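To make the staking and reward mechanics discussed in this post a little more tangible, here is a toy calculation of how a reward pool could be split between node operators in proportion to stake. It is purely illustrative: the numbers, names and the simple pro-rata rule are our assumptions, not the BlockScience model or actual Network parameters.
// Toy illustration only: hypothetical stakes and reward pool, not real parameters.
const stakes = { alice: 50000, bob: 30000, carol: 20000 } // DATA staked per node operator
const rewardPool = 1000 // DATA distributed for this period

const totalStake = Object.values(stakes).reduce((sum, s) => sum + s, 0)

for (const [operator, stake] of Object.entries(stakes)) {
    const reward = (stake / totalStake) * rewardPool // pro-rata share of the pool
    console.log(`${operator} stakes ${stake} DATA and earns ${reward.toFixed(2)} DATA this period`)
}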

Streamr network

20. 11. 24

Data Union Concept Ideas

The Data Union framework is now live. App owners and developers can explore ways to create unique data sets by incentivising users to crowdsell their real-time data. In this blog are a few early-stage concept ideas that could be used as starting points or spark ideas for how Data Unions might be deployed.If you think you or your team could implement one of these projects, or have an idea of your own related to real-time data crowdselling, please head over to the Streamr Data Fund and submit an application.Concept: A plugin or app that tracks viewer streaming habits for film/TV, music, video and more, matched with a demographic profile provided by the user.Data use case: Media producers, researchers, or advertisers could be interested in this data to better understand the latest trends and which media is popular with which audiences. This could, in turn, lead to more content that viewers like being commissioned and a better understanding of what's popular, across platforms, in real-time.Festival or event location toolConcept: A tool or plugin that can be easily integrated into festival or event apps that gives the managers real-time location insights into movement patterns. Attendees who opt in could be paid for their location data, receive discounts to spend at the event or be entered to win prizes like backstage passes. Location data can be paired with anonymised demographic info to increase the analysis potential.Data use case: Event managers might want to know the real-time footfall patterns of attendees to get insights into crowd flow optimisation or which event areas were most popular with which audiences, and incentivise users to opt in to share the data with them.Real-time environmental dataConcept: Smartphone cameras or sensors can allow people to share real-time insights about the environment. An app could be created to securely capture and send images of wildlife, natural events, UFOs, or more within the app and with a location and timestamp included to try and ensure authenticity.Data use case: Birdwatchers, hikers, or farmers might be encouraged to send pictures of wildlife or natural events to track changes, animal migration, habitat loss, or better understand environmental patterns.See the Tracey project for a similar example currently in development to monitor fish stock levels.Barcode shopping scannerConcept: Product barcodes can easily be scanned with a smartphone. Creating an app that records and categorises data about the type of product, where it was bought, its price, the date, etc. could be valuable information for suppliers, retailers, and researchers.Data use case: Food shopping could be a good use case. The stores themselves might like to know basic info about shoppers' buying patterns by demographic and could offer rewards or discounts in exchange for this data. Other interested parties might include health researchers, fitness apps or existing barcode scanning apps such as Sugar Smart.Retailers might be able to generate a QR code on a receipt to scan, rather than each individual product. Packaging could also be scanned when it is thrown away, which could be linked to smart fridges and devices for monitoring food stock levels.Data browser profile pluginConcept: It may be possible to create an anonymous browser profile with volunteered basic demographic data, interests, and categories of ads people are open to seeing and those they are not.
Websites or advertisers could offer a bid automatically for access to the information the user consents to sharing when they visit their site.Data use case: Websites get more insights into the kinds of people visiting their sites from data provided, and visitors get to control what info they share and receive payment.The ads on a website may also be able to better adapt, based on your category preferences for consented targeting (interest in sports, events, electronics, beauty etc). Website owners and advertisers can get direct insights and know their ads are being delivered to an appropriate audience.People could also benefit from this targeting by listing ads they don’t want to see, if they are trying to cut back on things like fast-food or alcohol for example, by opting out of ads in those categories.Cross-chain token balance dataConcept: A simple app or integration could be created that allows users to share their token balances cross-chain. Right now, some of this information can of course be seen on-chain, but there’s currently no visibility cross-chain or into accounts on centralized exchanges.Data Use case: For analysts, this could provide interesting aggregate market insights like “x% of BTC holders also own ETH”, as well as real-time insights such as “10% of the top whales dumped BTC in the past 15 minutes”.The ideas presented here are all early-stage concepts and require a substantial amount of further consideration and development to assess their viability, technical challenges, and ethics. If you’re up to the challenge, submit an application to the Streamr Data Fund!Learn more about Streamr: streamr.networkOriginally published at blog.streamr.network on November 19, 2020.Data Union Concept Ideas was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
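At the data layer, most of these concepts boil down to the same step: the member's app publishes its data points to the Data Union's stream, and the framework handles pooling and revenue sharing. Here is a minimal sketch (our illustration; the stream ID and fields are invented) using the barcode scanner idea.
const StreamrClient = require('streamr-client')

// Each member authenticates with their own Ethereum key.
const client = new StreamrClient({ auth: { privateKey: process.env.MEMBER_PRIVATE_KEY } })

// One anonymised purchase event from a hypothetical barcode-scanner Data Union.
client.publish('barcode-union-purchases', {
    productCode: '6414893012345',
    store: 'Helsinki',
    price: 2.49,
    scannedAt: Date.now(),
})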

Streamr network

20. 11. 19

Surveying the GDPR and Bloc...

In May 2018, the European Union's (EU) General Data Protection Regulation (GDPR) came into force, directing companies of all shapes and sizes towards the legal and ethical treatment of customer data. In the blockchain space, there has been a general view that blockchains are incompatible with the GDPR. However, it isn't that simple, because of the diversity of use cases that exist in the realm of blockchain technology.In this article, we will explore what the GDPR is and what developers and entrepreneurs pursuing blockchain tech in this space can keep in mind while they are building businesses. As the first order of business, let's take a closer look at the GDPR and its specifications for dealing with consumer data.The GDPR and why we need itTo simplify the GDPR, businesses are expected to treat the consumer data they gather in meaningful ways to create positive customer experiences. Depending on the kind of business (startup, small business, large corporation, etc.), the GDPR compliance requirements vary. However, some of the most common ones direct businesses to ensure that (personal) customer data is not lost, stolen, destroyed, or changed (these situations would qualify as data breaches).In today's data-centric society, there are bad actors that cause harm by misusing personal data, thereby breaching trust. Political consulting firm Cambridge Analytica unscrupulously breached data privacy guidelines when it used 50 million Facebook profiles to improperly influence elections in the USA, illicitly sharing this data to benefit the 2016 Trump campaign. This highlights how consumers lose all control over what happens to their data after they share it with large corporations, and that's why the GDPR plays a pivotal role in enabling an ethical data sharing environment.In order to avoid these circumstances, businesses need to make sure that they have the following bases covered.
1. Filling out Data Protection Impact Assessments (DPIAs) — this is for companies that are collecting customer data that could negatively affect individual freedoms. This includes:
a. Leveraging emerging technologies (blockchains, for example)
b. Processing genetic or biometric data like DNA testing
c. Tracking customer location data
d. Marketing to children.
2. All businesses should make sure that they have a privacy policy that comprehensively explains what happens to user data. This should include contact details of the companies, explain why and how data is being collected, how long the information will be saved on a company's database, rights of the users, details of the recipients of customer data, and contact details of the EU representative and the Data Protection Officer (DPO).
3. Businesses also need to prepare for data breaches and report the circumstances to supervisory authorities and customers within 72 hours.
The GDPR and blockchainsWith that said, we can establish that GDPR guidelines are set to help businesses build a healthy data management practice. However, with blockchain companies, GDPR guidelines and blockchain technology face several areas of conflict, due to two main reasons.Firstly, as per the GDPR, there needs to be an identifiable data controller (any entity that gathers and stores data, like a business) against whom data subjects (customers) can enforce their legal rights. With blockchains, we face a conflict of interest as they are decentralized ledgers with several operating nodes, as opposed to a single entity like a company.
Moreover, there needs to be consensus within operating blockchains on joint-controllership, but it could be onerous to assign responsibilities among nodes and maintain them.Secondly, the immutability of blockchains is an admirable feature that is vital for creating a trustless environment that preserves data integrity. However, in the realm of GDPR compliance, this causes some friction. The GDPR guidelines decree that the data collected by companies should have the option to be modified or erased where necessary to comply with legal requirements.Exploring solutions & the way forwardIt has been well established that there are significant points of tension between the GDPR's specifications and blockchains. Nevertheless, this situation is still navigable. The European Parliament conducted a study titled "Blockchain and the General Data Protection Regulation — Can distributed ledgers be squared with European data protection law?" in July 2019. In this study, they observe that efficiently assigning a joint-controller to a blockchain network will be key to preserving privacy guidelines. The role of the controller will mainly involve determining the purposes and means for the processing of personal data, with the added responsibility of complying with the GDPR guidelines.Additionally, because storing personal data on blockchains is a questionable practice with respect to GDPR compliance, storing data off-chain is a viable option. The data stored off-chain can be linked back to the blockchain with a cryptographic hash, so the personal data itself never needs to be written on-chain.With the above options being explored, and with the onus on providing regulation within the blockchain space, businesses are still directed to follow the principles of data minimisation and purpose limitation. These might not qualify as all-encompassing solutions to existing discrepancies, but these efforts can lead us to the right solutions. Regulators like the European Data Protection Board (EDPB) are providing funds for research into blockchain technology. Rest assured that, in time, there will be guidelines that enable ironclad data protection in the blockchain space.Originally published at blog.streamr.network on November 17, 2020.Surveying the GDPR and Blockchains to Enable Ethical & Legal Data Privacy was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
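As a sketch of the off-chain storage pattern described above (our example, with invented data), personal data can stay in a conventional, erasable database while only a salted hash of the record is anchored on-chain for verification.
const crypto = require('crypto')

// The personal data stays off-chain, in storage the controller can modify or erase.
const record = { name: 'Jane Doe', email: 'jane@example.com', consent: true }

// A random salt prevents brute-forcing the hash back to the personal data.
const salt = crypto.randomBytes(16).toString('hex')
const fingerprint = crypto
    .createHash('sha256')
    .update(salt + JSON.stringify(record))
    .digest('hex')

// Only the fingerprint would be written on-chain; deleting the off-chain record
// and the salt leaves an unlinkable hash behind, which supports erasure requests.
console.log('store off-chain:', record)
console.log('anchor on-chain:', fingerprint)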

Streamr network

20. 11. 17

Upcoming European Data Gove...

Upcoming European Data Governance Act to challenge big tech’s dominance over personal data brokeringNew legislation will put data governance models such as Data Unions and Data Trusts in focus, empowering consumers and unlocking new business opportunities for EU startups.Today the European Commission has released a public draft of their proposed EU Data Governance Act. Along with other legislation, it lays the groundwork for European data policy over the next decade. The act outlines how digital services should handle data in the future, with a particular focus on consumer rights, governance and brokering of people’s data. The draft is expected to be passed to the parliament for consideration in the coming months.The draft legislation comes at a time when a new governance for regulating data ownership is urgently needed. Too many scandals in the past have shown how mismanagement of data can lead to horrendous consequences. Next to that, tech giants make billions of dollars selling user data, without sharing revenue or allowing for rich consent models. The upcoming EU data governance act is proposing new, democratic models by which data can be managed by consumers and companies alike.The EU lobbying group MyData, who issued a statement last week, ahead of the official release of the draft legislation, noted that:“We welcome the regulation as a needed common ground for clarifying the role of data intermediaries, building trust in these intermediaries and setting the direction for data governance, similar to what GDPR did for data protection. At the same time, we advocate for careful scrutiny of the articles, as the Data Governance Act will be regulating a market that is in its very early stages, with many cycles of innovation to come. Thus, the regulation will have a strong influence in the nascent market.”MyData Global’s Teemu Ropponen, General Manager, added that, “The Data Governance Act should empower consumers to be in control of their personal data and ensure they benefit from sharing data. MyData Global counts over 23 Data Operators — or data intermediaries as they are called in the Data Governance Act. As this EU legislation describes, they are crucial in making personal data management through trusts and unions the new norm. MyData Data Operators offer a vision for a human-centric internet that gives control of data back to the users as well as benefits such as services, convenience and rewards from sharing it.”Streamr is one of the companies certified by MyData as an official Data Operator. Shiv Malik, Head of Growth at Streamr, said: “The EU’s Data Governance Act lays the groundwork for a strong, trusted European data ecosystem. As an open source infrastructure builder for Data Unions and cooperatives, this new legislative framework will enable us to power ahead and support organisations that represent people, rather than venture capitalists in Silicon Valley.By ensuring that new organisations are legally obliged to represent their users’ interests when it comes to sharing and monetizing their data, this act also signals the death knell for today’s failed data brokering economy, where most companies are forced to spy on users to obtain their data in order to make money.”Beyond Europe, we are currently witnessing similar developments in the US, in particular California’s recent vote on proposition 24 during last week’s presidential elections. This result marks a big stepping stone towards data ownership. 
The efforts towards the proposition were led by former presidential candidate Andrew Yang and his team who are also leading the Data Dividend project which aims to share data revenue with internet users via a digital tax.Originally published at blog.streamr.network on November 16, 2020.Upcoming European Data Governance Act to challenge big tech’s dominance over personal data… was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 11. 16

Dev Update, October 2020

Welcome to the October project dev update!This month has been busy — our long awaited Data Unions launch commenced, and we began our marketing push to developers around the world. We also announced the Streamr Data Challenge, a hackathon for India-based developers and teams. The hackathon will run over the next few months with a focus on building new data economies through Data Unions.The Streamr tokenomics model is always a hot topic for the community, so we’re thrilled to announce we’ll be having an open session with BlockScience in November to discuss exactly that. We’ll be covering what’s been done in relation to tokenomics, where we are now, and what’s up ahead. To this end, we’re asking the community for suggestions on specific questions or topics you’d like us to cover. Please reach out with your thoughts to me on Telegram or by email.And as usual, we’ve been busy working towards our roadmap goals. Here are the development highlights of the month:BlockScience team is transitioning into Phase 3 and setting up test plans to approach running the first useful simulations.Kicked off an experimental project StreamrFS, a file sharing extension on top of the Streamr protocol.Conversion of API keys to Ethereum private keys is complete, simplifying the codebase and taking a step towards decentralized identities.WebRTC now works reliably. Re-ran the whitepaper experiments with WebRTC, which verified correct operation.Working on adding the end-to-end encryption back into JS client after refactoring.Data Union 2.0 audits complete.The Network & TokenomicsTwo internal workshops were held to develop a precise understanding of the KPIs for each role in the Network. These exercises complete the final pieces of the model and allow us to transition into Phase 3 — running the first simulations of the Network.The first experiments will explore 1) The bootstrapping phase (Streamr incentivises people to run nodes to kick off the network) and 2) system resilience (how the network tolerates knockout shocks and finds a new equilibrium).Strong efficiency gains were also noticed with stream resends. If you had any issues in the past, expect it to work much better than previously with the new storage node.Core AppData streams are taking another step towards being crypto native. Currently streams are managed by our centralized infrastructure and are identified by machine readable IDs. We’ll be replacing these with human-readable stream IDs, with hierarchical structure based on ENS domains and a user-given path. So streams will start to look like mydomain.eth/traffic/helsinki for example. The Stream registry will live on the blockchain and there will be a sandbox domain where everyone can create streams without having to register an ENS name.Data Unions 2.0The team has made steadfast progress towards the second iteration of the Data Unions framework. The contracts are built, and the audits are complete. The frontend team is now exercising 2.0 functionality with the JavaScript client. The team is now working on some final tests and towards making the Java client compatible with 2.0 architecture.We’ll be setting up a test environment for early adopters to start experimenting with some of the new features of Data Unions 2.0 later this year.StreamrFS — An experimental file-sharing protocol built on StreamrFile sharing on Streamr? Yes, you read that right. 
While the Streamr protocol is built for real-time data, a common request from users is the ability to also share access to files, especially in the context of the Marketplace. This requirement has also emerged from the KRAKEN project, an EU-funded project where Streamr technology is used to share medical data in hospital environments.Real-time data, while powerful, is still somewhat niche in the nascent data economies, while file sharing is conceptually familiar to many, potentially making it a worthwhile idea to explore. To answer this need, an internal hackathon produced a prototype called StreamrFS. By leveraging the pub/sub messaging, storage, and encryption primitives offered by the Streamr protocol, it turned out to be quite straightforward to implement a standardised higher-level mechanism for sharing files and selling access to them. Many kinds of applications can leverage the Streamr protocol — and file sharing is one such application.In StreamrFS, files are stored in encrypted form in a 3rd party location (IPFS, Swarm, BitTorrent, etc), with encrypted Streamr messages containing a reference to that location along with the credentials needed to access and decrypt the file. By connecting access to the file with access to a stream, essentially making them the same thing, access to files can for example be sold on the Marketplace. While StreamrFS started its existence as a command-line application, it might later become natively supported by the Core and Marketplace applications.Deprecations and Breaking ChangesA number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked 'Date TBD' will be happening in the medium term, but a date has not yet been set.December 31st: Support for API keys will endThe ability to create API keys has already been removed from the Core App, while previously existing API keys will continue to work until December 31st, 2020. After this date, scripts and applications still using API keys may break. To avoid disruption, simply create or connect an Ethereum account on your Profile page, and pass the private key for that account to the Streamr client library in your application:
JS library:
const client = new StreamrClient({ auth: { privateKey: 'your-private-key' } })
Java library:
StreamrClient client = new StreamrClient(new EthereumAuthenticationMethod(yourPrivateKey));
December 31st: Email/password login will be removedIf you're still using email/password-based login, avoid getting locked out of your account after December 31st, 2020 by doing the following: Install the MetaMask wallet, go to your Profile page, and connect your wallet to your account. From there on, you can use the "Authenticate with Ethereum" option on the login screen instead of the email/password combination. In addition to MetaMask and compatible Web3 wallets, various wallet connectivity solutions such as WalletConnect will be supported in the future.December 31st: Storage becomes opt-in and freely choosableSo far, the storage node operated by the Streamr team has stored all messages in all streams for a period of time configurable per stream, with the default being one year. Going forward, there may be many storage nodes operated by different parties and located in different geographies, and more control over storage will be needed.As a stream owner, you now have control over which, if any, storage nodes you'd like your historical messages to be stored on.
The controls for choosing storage nodes for your stream are already present in the Core application, although at the moment there is only one option shown: Streamr Germany, which is the storage node that has been storing data so far.By default, no storage nodes are selected, and stream owners can opt-in to storage by selecting the storage nodes they wish. Starting January 1st, 2021, storage nodes may only store the streams assigned to them, and purge data for any other streams. To avoid losing any important historical data in your streams, please use the Core application to assign your streams to the Streamr Germany storage node by Dec 31st to maintain the status quo and continue storing your streams.(Date TBD): Support for unsigned data will be dropped. Unsigned data on the network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).Thanks for reading!If you're a developer interested in contributing to the Streamr ecosystem, consider applying to the Streamr Data Fund for financial backing to fast-track your plans.Originally published at blog.streamr.network on November 12, 2020.Dev Update, October 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
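For a rough idea of how the StreamrFS pattern described in this update could look in code, here is a sketch of the publisher side: encrypt a file locally, push the ciphertext to third-party storage, and publish a reference plus decryption credentials to a stream. All names here are our assumptions and the upload step is a stand-in; the actual prototype is a command-line tool.
const crypto = require('crypto')
const fs = require('fs')
const StreamrClient = require('streamr-client')

// uploadToStorage is a placeholder for an IPFS/Swarm/BitTorrent upload, not a real API.
async function shareFile(path, streamId, uploadToStorage) {
    // Encrypt the file locally with a one-off symmetric key.
    const key = crypto.randomBytes(32)
    const iv = crypto.randomBytes(12)
    const cipher = crypto.createCipheriv('aes-256-gcm', key, iv)
    const encrypted = Buffer.concat([cipher.update(fs.readFileSync(path)), cipher.final()])

    // Store the ciphertext in a third-party location and keep only a reference to it.
    const location = await uploadToStorage(encrypted)

    // Publish the reference and credentials to the stream, so selling access to the
    // stream effectively sells access to the file.
    const client = new StreamrClient({ auth: { privateKey: process.env.PRIVATE_KEY } })
    await client.publish(streamId, {
        location,
        key: key.toString('hex'),
        iv: iv.toString('hex'),
        authTag: cipher.getAuthTag().toString('hex'),
    })
}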

Streamr network

20. 11. 12

Introducing the new Streamr...

The Streamr Data Fund provides funds to people and projects working with the Streamr stack to create and support real-time data projects. As it stands, there are 7.5 million DATA tokens left to award to prospective builders from an initial pot of 10 million.Funds are awarded in two tiers:Quick response grants offer up to $5,000 USD and are available for building integrations, demo apps, adding quality free data sources to the marketplace, research, other community initiatives, and other smaller projects.Project grants offer up to $50,000 USD, and they are available to fund applications, data tools, and other larger projects.To apply, send a submission using the application form on the new landing page and a member of the team will be in touch to discuss your proposal with you and answer any questions. For larger projects, Streamr can offer guidance and feedback on how to structure your proposal before the final funding decision.See the FAQ on the landing page to learn more, or you can get in touch here if you have any questions.New Data Fund awardeesSince we restructured the grants, we have accepted two submissions: a health and wellness Data Union called Xolo, and a tutorial series that utilises the API Explorer to generate a lightweight wrapper library for the .NET ecosystem.It's early days for Xolo — the project is in its first weeks of development and is going to be the world's first Data Union to monetise wellness metrics, like heartbeat data. We'll be sharing more details as the project matures.The tutorial series was funded to grow developer adoption in .NET by providing clear tutorials on how to perform some basic operations on Streamr, using the API generator tool to quickly generate the necessary boilerplate code. This ties into our larger vision of creating as many data onramps as possible so that developers can integrate Streamr into their apps quickly and efficiently.Why the change in format?Formerly the Community Fund, the Streamr Data Fund has been updated and its management has been brought in-house at Streamr. It was a pleasure to work with many enthusiastic admins from the Streamr community, but we felt this change was needed to streamline the onboarding process and make it more attractive to prospective builders.We look forward to reading your proposals!Originally published at blog.streamr.network on November 10, 2020.Introducing the new Streamr Data Fund was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 11. 10

7 Ways Blockchains Can Enab...

In today’s data-centric age, managing data and protecting user data is a top priority. The world as we know it has the capacity to generate large amounts of data. This year, every person on earth is estimated to generate 1.7 megabytes of data, every second. To put that into perspective, that’s 1.8 quintillion bytes of data in 2020 alone. With countless files scanned, reported and analysed, big data just keeps on getting bigger, and common people who contribute to these statistics need to be protected from user data abuse. This casts a spotlight on the need for efficient data management, and the importance of enforcing measures to sustain data privacy.Data privacy aims to preserve the rights of individuals, regulate the purpose of data collection and processing, establish privacy preferences, and regulate the way organisations govern personal data of individuals. In the mix of solutions that cater to this need, blockchain tech-based solutions can make significant contributions to help preserve data privacy.Blockchain technology made its debut when Satoshi Nakamoto published the Bitcoin whitepaper in 2009. Ever since, it has traversed through several industries and adapted to fit several use cases regarding big data and data science. As a matter of fact, blockchains and data science and big data are similar in more ways than some. Data science aims at creating protocols for proper data administration, and blockchains are enabled to create trustless databases by maintaining a decentralized ledger.As such, blockchains have several applications within the big data realm that can enable data privacy and management. Here are seven ways through which blockchains can enable and perhaps even revolutionise data privacy.Blockchains as immutable decentralized databasesOne of the foremost things that we’d need to preserve data is an immutable set of records. As a database, blockchain technology can ensure that the data entered cannot be tampered with, ensuring data integrity. This is a handy application that can be applied to manage user data. Users can share their information with organisations, having full purview of the data that they shared and, to certain extents, how they are used.Preventing malicious activities with user dataBlockchains are empowered with a democratised system of implementing changes via consensus algorithms. With this measure in place, blockchain-based databases can be quite formidable for any malicious activities like hacking user data. If a hacker with malicious intent tried to hack user data, said hacker can be easily identified and expunged from the network. Given the distributed nature of the network, it would be near impossible for a single party to gather the required computational power to tamper with the validation criteria and cause any further havoc.Generating self sovereign identities on blockchainsWith several public and private organisations (like governments and corporations) looking to digitise citizen/user identities, blockchains can create self-sovereign identities (SSIs). With SSIs, people and businesses can choose the personal information they want to share without relying on a central repository of identity data. These identities can be generated and used independently of nation-states, corporations, or global organisations. This solution will help the people govern and own their personal data.Monetising user data usage among third partiesData monetisation is a great use case for blockchain tech. 
In this data-centric world, people can decide who can profit from their personal data with self-sovereign identities and blockchain-based models. The scope for this use case is tremendous; over 60% of the total global GDP is expected to be digitized by 2022.Personal data will only continue to increase in value.Streamr has revolutionised the idea of data monetisation through Data Unions. Service providers, advertisers, and any other organisation or individual can crowdsell their personal information on the Streamr Network, along with their fellow Data Union members, earning Streamr DATAcoins that can be spent in the real world. This is an apt example of how blockchains can be used to monetise data.Blockchain can provide verifiability to dataIn addition to storing data, blockchains can provide verifiability to user data. As a distributed and permanent record of transactions, user data can be encrypted to provide levels of access within a peer-to-peer network. This will prevent the existence of incorrect or duplicate data amongst peers within a network, verifying data that is added, authenticating the users who are present within the network.Enabling data traceability with blockchainsBlockchains can enable data traceability, leading to more transparency and traceability to combat these losses. Blockchains can employ tokens to give users unique identifiers to store information securely. This information can even be stored off-chain with reference to its unique hashing algorithm. Moreover, permissioned blockchains can give exclusive access to the information on blockchains.Enabling personalised smart experiences via predictive analysis with blockchainsUser data involving behaviours and trends stored on blockchains can be used to mould personalised smart experiences. This can help facilitate extensive predictive analyses of data to provide product/service experiences that are unique to consumers on an individual level. Moreover, to unburden the process, blockchains can provide structured data gathered from individual consumers.So what does the future of data management look like? With the need for data management and user data privacy growing, blockchain-based implementations and solutions will provide today’s data-centric-world-as-we-know-it with a means to simplify interacting and engaging in virtual environments (that’s the internet for you). The Streamr Data Union framework will open up a world where internet consumers can have seamless experiences in terms of data management and privacy by giving them the power to choose what happens to their data. This will also help shape meaningful relationships between businesses and their consumers.— —Do you think you can integrate a Data Union with your business? Or do you want to give your users the power to decide what happens with their own data?Well join the Streamr Data Challenge and make your ideas a reality. Learn more here: www.streamrdatachallenge.comOriginally published at blog.streamr.network on November 4, 2020.7 Ways Blockchains Can Enable Efficient User Data Management was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 11. 04

Meet TX — Tomorrow Explored...

Meet TX — Tomorrow Explored: helping companies transform to meet tomorrow’s data value chainTX — Tomorrow Explored is helping organisations to leverage the Streamr technology stack to engineer value from data in ways that benefit business and consumers.Consumers are becoming more and more distrustful of organisations using their data. While regulation like the GDPR in Europe is put in place to protect users’ privacy, their downside is that data is becoming even more siloed. The reality is that the tech giants are taking ownership of data produced by consumers of their apps and services and making massive profits from data sales, while little is left for the consumer generating the data. But there is a gradual shift in attitudes as consumer awareness of what happens to our personal data increases. With every passing moment, it’s becoming harder and harder for organisations to justify data sales with just the value gained from using an app or service in returnThese questions will only increase in relevance as the use of IoT devices, wearables, smart vehicles and other smart devices become more prevalent in society. Who owns the data generated by IoT devices in your home or your wearable device? By your smart vehicle? Or by the smart vehicle driven by your Uber driver taking you from A to B? Who should be the one directly benefiting from the value of this data?Organisations are looking for innovative ways to gain a competitive edge, though many are still not leveraging the value of the data they already have. They may feel powerless to utilise the data out of fear of backlash from their customers. Given that working with data can be an ethical and legal minefield, a PR risk and full of technological challenges, many are still afraid to make the moves that would take them towards becoming a data-driven business.A data revolution is coming, and the winning organisations will be the ones working hand-in-hand with their customers to create value from data in ways that benefit all parties. Data holds enormous power when shared — power that can bring undeniable value to consumers and businesses.TX can help those organisations who are ready to embrace this new approach to consumer data. We find innovative ways to empower your customer base to share and monetise their data. That’s right — for your customers to share their data, the data that belongs to them, allowing them to remain in power.Our services are helping companies to engineer data economies together with their customer base. Through use of the Streamr technology stack, companies have the opportunity to empower their customers to opt in to data sales where both the company and the individual can benefit. Streamr has already solved many of the technological challenges, such as enabling micropayments to a large number of data providers, to creating a marketplace for data where both the company and consumer share in the revenue generated from data sales. 
Streamr’s technology is customisable to be fully compliant with GDPR and other international privacy laws.We are tasked with how to make this technology usable to organisations and design the business models and solutions around it.We’re already doing this in multiple industries globally:We’re developing a blockchain-enabled traceability and trade data application, Tracey, for the seafood industry in partnership with Union Bank of Philippines and the WWF.We’re part of a consortium of 10 companies building a data marketplace for the secure and privacy-preserving sharing and trading of personal health and education data in an EU H2020 funded project, KRAKEN.We worked with Metro Pacific Tollways Corporation, the largest toll road developer and operator in the Philippines, to identify ways of generating value from their data to help them on the road to becoming a data-driven company.How does working with your customers to create value from data improve your competitive position?With a business model enabled by this technology, your customers are more likely to stick with your product because of the added incentives derived from data sales. This unique value-add can be the difference between choosing between a competitor product and your own.What’s more, this is not just an experiment. All of this is achieved while improving the bottom line. Once your customers give consent to selling the data, both you and the customers share in the rewards. This is a win-win. It’s a sustainable business model. It’s data done the right way.How do you get started on this data journey?We work in three stages to help our clients engineer value from data.Stage 1: AssessEach industry is different, but at their core they all have the same asset in common: data. Understanding where the value can be derived is where our work begins. In partnership, we will help you to build a digital strategy that will unlock the value of the data within your customer base. Important components of this work include:High-level review of the data and associated infrastructureDigital strategyCompliance with related privacy lawsOutline of a pilotWireframe of product proposed for pilotStage 2: PilotPilots can be simulated or conducted live with your customers. They typically range between 8–16 weeks depending on the complexity and need for bespoke app development. The Streamr standard package utilises the open and public Streamr Technology Stack. For organisations wishing for more privacy during the piloting, the Enterprise package enables piloting “behind closed doors” until you’re ready to communicate to the world.Key activities in the Pilot exercise include:Defining KPIs for measuring pilot successCustomer (user) research exerciseConducting training/onboarding for technology for pilotDeploying standard or enterprise package (this may include MVP app depending on requirements)Conducting pilot for 2–4 weeksEvaluationRecommendationsStage 3: EmbedWith the pilot complete and deemed a success, the final step is to deploy into a live environment. We will make the technology market-ready and provide assistance with installation, deployment maintenance and operations. 
Since Streamr is an open source technology stack, any upgrades to the Streamr stack can also be merged into your product stack even after its deployment.If you’re looking to leverage the Streamr technology stack in your business or want to discuss business opportunities around engineering data economies, please get in touch with me at rob@tx.company.TX — Tomorrow Explored is an advisory and software development company creating solutions that empower industries, organisations and people by unlocking the value of data. We advise organisations on blockchain and Web 3 technologies and develop solutions for various industries.Originally published at blog.streamr.network on October 29, 2020.Meet TX — Tomorrow Explored: helping companies transform to meet tomorrow’s data value chain was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 10. 29

Welcome to the Streamr DATA...

Welcome to the Streamr DATA Challenge: a new way to monetize real-time dataData is eating the world.Every aspect of our digital life is generating data points, which are fed into algorithms that give us a smart experience in return for convenience and efficiency. From the traffic data on our favourite maps client to the compelling advertisement that is served to you across your social media platforms, the terabytes of data we collectively generate make this possible.But as data becomes central to these smart experiences, it becomes important to ask a few questions. Where did this data come from? Who actually owns this data? What is the value of this data? Was this data given consensually?Unfortunately, the digital age didn’t ask these questions often enough and we are beginning to see some consequences such as the disruption of the integrity and confidentiality of data and information systems, data security threats, personal data breaches etc. These consequences will only get bigger when smart appliances and eventually bioelectronics become more mainstream.Application developers have to answer these questions on data provenance, ownership, value and consent and do something about the data economy, today.Enter StreamrStreamr provides an innovative way for app developers to incentivise their users to share their real-time data. This is done in a decentralized manner and rewards are instantaneous through Streamr DATAcoins, as the system is built on a public blockchain.With Streamr, app developers are building Data Unions that allow users to sell their real-time data to willing buyers, thereby providing other app developers access to user insights in a consensual and decentralized manner.This powerful technology is already used by many interesting startups, as well as enterprises, and now Streamr is reaching out to the smart minds of the Indian subcontinent to innovate using this platform.Participate in the Streamr Data ChallengeIf you have worked on any app, at a hackathon, or as a hobby project, or even at a startup with significant users; a simple integration with Streamr can now allow you to reward your users for their real-time data in a scalable and decentralized manner.Streamr Data Challenge allows direct entry to the first shortlist if you’ve built such apps. As a part of this innovation challenge, you stand a chance to win from a prize pool of 5000 USD, with the top 20 teams guaranteed to win prizes.Be a part of this global experiment to set how internet companies handle and reward personal data.Register now and build a data business, the right way!Originally published at blog.streamr.network on October 28, 2020.Welcome to the Streamr DATA Challenge: a new way to monetize real-time data was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 10. 28

We’re partnering with Lumos...

We’re excited to announce that we’re bringing Streamr to one of the biggest developer talent pools in the world — India. At Streamr, we’re always looking for opportunities to push innovation in the realm of data innovation and build decentralized data economies. With this vision, we’re partnering with Lumos Labs — a tech ecosystem enablement startup, to bring you the Streamr Data Challenge.Tapping into India’s PotentialIndia holds the potential to build a decentralized data economy. Presently, it is leading the global data consumption market with the increasing mobile data connectivity (3G/4G), falling data tariffs, rising smartphone penetration, and growth in broadband connectivity across India. The exponential data growth in India is projected to continue, with internet traffic expected to increase 4x from 21 exabytes in 2016 to an estimated 78 exabytes in 2021, as per the report by Omidyar Networks. At Streamr, we see an opportunity to incentivise Indian developers to leverage our platform to innovate and reinvent solutions to the biggest problems in India’s booming big data arena.The Streamr Data ChallengeThe Streamr India Data Challenge is focused on opening up India’s tech community to the problems that people face or will face with respect to data privacy, ownership, value, and most importantly — preserving their data dignity. On that note, we’re challenging the Indian developer community to join us and build solutions to real-world problems in big data with blockchain technology, by leveraging Streamr’s Data Union framework.The 4-month-long Data Challenge will also host meetups and webinars to help the Indian tech community learn about leveraging blockchain for big data problem statements and learning to build with Streamr. We will also begin with a one-week long mentor session, followed by a two-week-long intensive acceleration period. The ongoing support ranges from access to tech guidance on projects, networking opportunities, PR support and more.The winners will get a cash prize of $5,000 USD and the teams shortlisted in the first cohort will receive a $200 USD grant each.About the PartnershipOur partners — Lumos Labs, are experts at hosting open innovation programmes to encourage and incentivise innovators to push the boundaries and bring solutions to problems faced by consumers and business. We will work with them closely to create a Streamr community in India by kicking off the Streamr India Data Challenge. “We’re excited to step into the thriving tech ecosystem that India is. We’re sure that Streamr would be a well-received platform that can push innovation in the big data space,” said Streamr’s Head of Developer Relations, Matthew Fontana. “We’re excited to join hands with Lumos Labs and support the Indian developer and startup community to take on the biggest problems in big data, namely data ownership and value sharing.”Raghu Mohan — Co-founder & CEO, Lumos Labs, on behalf of his team, has expressed his enthusiasm to work with us. “We’re excited to begin work with Streamr to bring India’s tech community a new challenge with the Data Challenge,” he said. “Incentivising the Indian tech community to build solutions in this space is a huge step towards enabling innovation,” he added.— —We’re excited to see what unfolds with the Streamr Data Challenge. 
You can learn more about Streamr here: streamr.network and register for the Data Challenge here: streamrdatachallenge.comStay tuned for more!Originally published at blog.streamr.network on October 20, 2020.We’re partnering with Lumos Labs to bring to you the Streamr Data Challenge was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 10. 20

News: Streamr signs pilot a...

News: Streamr partners with GSMA to deliver Data Unions to the mobile sectorToday, Streamr is announcing a partnership with GSMA, the industry body for mobile telecom communications. Streamr and GSMA have partnered to allow three mobile network operators (MNOs) to monetise their user data ethically.GSMA and Streamr will work together to deliver a technological accelerator programme to selected mobile network operators (MNOs). This initiative aims to fast track potential adoption of new technologies that permit users to share and monetise mobile device data in partnership with operators.The 90-day program, billed by GSMA as an “exciting opportunity” to pilot new privacy-centric technology, seeks to support new approaches to user data monetisation and help these innovations scale. The program is designed to allow telcos and their users to jointly access billions in new revenue from the consumer insights market, in a manner that complies with regulatory environments.The new monetisation methods are likely to receive support from Brussels in the Data Services Act next year.“Given regulatory changes, and rapidly changing consumer attitudes to both privacy and the value of their data, the only sustainable way for MNOs to monetise mobile data, is by gaining overt consent from their users.“We also know that the consumer insights industry is desperately underserved when it comes to data from mobiles. We are confident that Streamr’s revolutionary Data Union framework will allow them to capture and record this consent dynamically and securely,” said Shiv Malik, Head of Growth at Streamr.Using a smartphone app, end-users will be asked outright if they want to opt in to join a Data Union to sell their data in partnership with their network operator, and they will also be asked what exact data they would like to sell in the process. No data will be actually sold as part of the pilot.Streamr co-founder Henri Pihkala added, “It’s very exciting to consider the potential for this pilot. MNOs are ideally positioned to unlock the rich customer insights that their subscribers create on their devices each day. Privacy-focused data monetisation that works with those users, presents a significant new income stream, as the industry faces multiple pressures on existing revenues”.Immediate use cases for the pilot’s data include consumer footfall and mobility intelligence for brands, retail operators and commercial landlords, as well as for use in events management and city planning applications.The pilot will also include a significant research element, gathering user experiences on the ability to control how, and with whom, their data is shared — as well as how they feel about receiving a share of its value in the future. Learnings will also inform network user retention strategies.— -Listen to Shiv Malik, Streamr’s Head of Growth, talking about Data Unions in a recent interview with the BBC’s Digital PlanetOriginally published at blog.streamr.network on October 19, 2020.News: Streamr signs pilot agreement with GSMA was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 10. 19

Dev Update, September 2020

Welcome to the September project dev update! Streamr has earned a bit of a reputation for ending the year strong, and as we round out Q3 2020 it’s clear that this year will be no exception. Here are the main dev highlights of the month:

The network whitepaper is proceeding through peer review at IEEE
A new storage node and Cassandra cluster are up and running
First round of audits complete for the Data Union 2.0 smart contracts
Major refactor of the JS client in progress
Network cadCAD models are working with realistic random topologies

Data Unions

The Data Union 2.0 smart contracts have now gone through the first round of security audits with no major findings. We’re fixing some minor recommendations made by the auditors, after which they will check our fixes, and the audit will be complete.

Core & Client Development

The JS client is being updated to the Data Union 2.0 era. It won’t become the official release (latest tag) until 2.0 is officially launched late this year, but it is available on npm as an alpha build for builders to start trying it out.

The frontend team is busy preparing the Core UI for the transition from account API keys to Ethereum private keys. This transition is an important prerequisite for the progressive decentralization of the creation and management of streams on the Network.

The Network whitepaper received positive feedback during the peer review process at IEEE. That review is ongoing, with some requests for new information that we’re following up on. The exciting takeaway here is that our results and findings were not challenged during this review, giving us even more confidence in our network design.

The network team made improvements to the WebRTC implementation to reduce message latency. While certainly more complex, WebRTC has the added benefit of having mechanisms to work around firewalls and NATs, thereby increasing the chance of successful peer-to-peer connections.

Our collaboration with BlockScience continues. We are nearing the completion of the cadCAD modelling phase, before diving into the incentivisation modelling. Essentially, we’re developing a digital twin of the network, to be able to simulate how various parameters affect the network’s performance and security. The models are generating realistic random topologies and we are expanding them to include the message passing level. The next step is to simulate ten nodes with realistic rules and define stakeholder KPIs.

Deprecations and breaking changes

A number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked ‘Date TBD’ will be happening in the medium term, but a date has not yet been set.

The API endpoints for explicitly deleting data have been removed. Going forward, storage nodes will expire old data based on the data retention period set on the stream.
/api/v1/streams/${id}/deleteDataUpTo
/api/v1/streams/${id}/deleteDataRange
/api/v1/streams/${id}/deleteAllData

The API endpoints to upload CSV files to streams have been removed. Storing historical messages to streams can be done by publishing the messages to streams normally.
/api/v1/streams/${id}/uploadCsvFile
/api/v1/streams/${id}/confirmCsvFileUpload

(Date TBD): Support for email/password authentication will be dropped. Users need to connect an Ethereum wallet to their Streamr user unless they’ve already done so.
As part of our progress towards decentralization, we will end support for authenticating based on centralized secrets such as passwords. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.(Date TBD): Support for API keys will be dropped. Applications integrating to the API should authenticate with the Ethereum key-based challenge-response protocol instead of API keys. As part of our progress towards decentralization, we will eventually end support for authenticating based on centralized secrets such as API keys. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.(Date TBD): Support for unsigned data will be dropped. Unsigned data on the network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will be ceased as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).Thanks for reading!If you’re a developer interested in contributing to the Streamr ecosystem, consider applying to the Streamr Data Fund for financial backing to fast track your plans.Originally published at blog.streamr.network on October 15, 2020.Dev Update, September 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
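To make the key-based flow described in the deprecations above more concrete, here is a minimal sketch of authenticating with an Ethereum private key and publishing messages with the JS client, the approach that replaces API keys, email/password logins and CSV uploads. It is written against the streamr-client package of this era; exact option and method names may vary between versions, so treat it as illustrative rather than authoritative.

```typescript
// Illustrative sketch: Ethereum key-based authentication + publishing,
// the flow that replaces API keys, email/password logins and CSV uploads.
// Based on the streamr-client JS library of this period; option names may
// differ between versions, so check the docs for your installed version.
import StreamrClient from 'streamr-client';

const client = new StreamrClient({
  auth: {
    // The Ethereum private key identifies the user and signs every message.
    privateKey: process.env.STREAMR_PRIVATE_KEY!,
  },
});

async function publishReading(streamId: string): Promise<void> {
  // Messages published this way are signed, satisfying the upcoming
  // "signed data only" requirement mentioned in the deprecations above.
  await client.publish(streamId, {
    temperature: 22.4,
    timestamp: Date.now(),
  });
}

publishReading('my-stream-id').catch(console.error);
```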

Streamr network

20. 10. 15

Data Unions: Questions on d...

1). How are Data Unions ethically different from Big Tech, seeing as they are trading user data? Yes, users can consent, but why get involved in this sordid industry at all?

Firstly, consent is a big deal. At the moment, the data broking industry does not work on the basis of informed and overt consent. People are pressured into signing away permissions which are themselves buried on page 34, subsection B of a 62-page contract. It’s fake consent. Just shifting that, to make the whole transaction overt and informed — what we’d call rich consent — massively changes the ethical basis for engagement.

Secondly, not all data monetisation is about individual targeting. I think that’s where most of the ethical issues arise. I sell data about myself, ‘Shiv Malik’, so that advertising companies out there can manipulate me. Well, I agree, that isn’t great. But there are other forms of data that are really incredibly useful.

When it comes to deindividuated or aggregated data, the ethical harms posed are quite different. If I can sell my data, anonymised and as part of a much larger collective, then I’m allowing third party companies to make decisions based on collective behaviour. Assuming that the data cannot be disaggregated and I cannot be meaningfully identified (granted, that is often hard to achieve), my own privacy is not jeopardised.

Collecting behavioural information about humanity is of course what, say, universities do all the time. In today’s world, we all depend on other people receiving decent information about society: where to put investment, what drugs work, modelling weaknesses in infrastructure, how to improve sales, where pollution is greatest. The list goes on and on. And the markets for aggregated data — where the data buyer really isn’t interested in targeting the individual, but they do care about collective behaviour — are pretty vast. If that is a sordid business, then we should shutter modern society now and go back to living in caves.

Let me give some examples: a hotel maker wants to know where to invest next — data about travel patterns would be good. That’s not the same as wanting likely holidaying individuals to advertise to. Or a city needs to know about road planning. Or Tesco wants to know about footfall. Or I want to know about local pollution. Or I’m a TV producer who wants to know what people are watching on Netflix. None of these data products require that the individuals be identified and targeted as an inherent part of the product. It might happen that they are, or could be. But the product doesn’t need to know who the individuals are and how they can be targeted to be a highly useful product.

Finally, you should, I’d humbly suggest, really be turning your question on its head. Unfortunately, you are already involved in this “sordid” business whether you like it or not. You are already a data product to Silicon Valley. So the question really should be this: what are you doing about it? If your answer is to try and keep washing your hands of it all, is that really going to work? You can’t Pontius Pilate your way out of this problem.

When people are buying Amazon Alexas in their millions, it’s clear that the privacy movement has failed. Simply standing to one side and calling for more legislation will not itself improve people’s privacy. And it will not stop Silicon Valley from monopolising the information we all create, because this is not just about an individual’s privacy. It’s also about socio-economic power.
They not only suck up all the capital and cash, Big Tech effectively governs our lives because of the monopolised data they collect.We hope that Data Unions lead the way in breaking those data monopolies. By creating governance structures and organisations that ensure that professional people work on your behalf and in your interest, information should only then be licensed and utilised by people with the highest ethical standards. By the way, these ideas are not just Streamr’s — they are supported by thinkers and practitioners like Jaron Lanier, the MyData movement, RadicalxChange, and very soon, we hope, the European Union.Video link:https://youtu.be/reHOBrS7szg?t=100 2). Surely user data can still be exploited for unscrupulous means by unethical data buyers? Can users choose who their data is sold to? Why not now? When?Great question. The short answer is yes, users can choose who their data is sold to. From Streamr’s technological perspective you can do it now. We’ve just implemented buyer whitelisting so if Data Union admins want to restrict sales of the product to approved buyers, they can enable that. But for end users to have their opinions recognised, application builders must implement the buyer whitelisting mechanism at their end.For example, you can simply imagine an interface asking if you’re happy to sell your data to only a charity, charities and government organisations or anyone. At the backend this would mean creating different buckets or data products on our Marketplace. It would be down to a Data Union builder to ensure they KYC’d potential buyers and that they were whitelisted to get access to only the right buckets of data where all the users who make up that real-time data stream were happy to sell to that sort of company.So it’s easy enough to do for Data Union administrators and I believe it is something Swash is working on. It’s also something which needs to be moulded into Data Union governance. Our hope is that there is supra-national legislation to deal with Data Union governance which ensures these sorts of standards must be implemented and aren’t just a ‘nice to have’.3) Are there any types of data that are off-limits to collect?This would be a personal opinion but I think there are real issues with deeply personal data that is unlikely to change over time, for example your genetic code. Of course it turns out this is getting traded all the time anyway. But for Data Unions to get into that would disturb me, because we’re far too early into this game to know the consequences with data that is so high stakes.Otherwise I’m fairly liberal. This data only gets collected if people want to have it collected. They have to take proactive actions to ensure that’s the case, like downloading an app that specifically tells you it’s there to collect this and that real-time data. (And in return — you get paid). That’s very unlike today, where that information is being taken from you. There are literally hundreds of apps — you’re bound to have one of them on your phone — that hoover up your location data. They literally know where you sleep, eat, walk, and go to the toilet. You think you’re searching for weather or finding out where the cheapest gas is, but in fact you’re supplying deeply private information.4). Are there any general TS agreements to consider for integrating, collecting and selling data from third-party devices, such as a FitBits or mobile phones?If I understand your question right then yes. 
There are few companies out there that don’t defend their data silos with a ring of lawyers! Many platforms are happy for third parties to integrate their services. To do that, those third parties often need access to user data. However, how that user data is then licensed for use is often set out in third party developer T&Cs. (See for example subsection h. of Spotify’s T&Cs here.)

However, the European Commission has stated that it wants to change this in two ways in the next few months. Firstly, it wants to ensure hardware manufacturers open up their device data. Secondly, it wants to revamp Article 20 to ensure that everyone has real-time programmatic access to their data from any platform. Hopefully that happens sooner rather than later, but when it does, it is going to be a huge revolution for Data Unions and the world.

A revamped Article 20 would allow people to port machine-readable data from their Netflix, LinkedIn, Google or Spotify accounts, for example, and allow them to send those real-time streams to a DU. Such data — stripped of personally identifying information — might be bought by production houses looking to create better TV programmes, recruitment agencies, developers looking to create better map applications, or developers/musicians looking to create a cooperative music platform alternative.

5). How are regional differences in data collection laws managed?

To be honest, this is still a bit of an unknown, and one for Data Union admins, rather than Streamr itself. Of course we have US and EU policy/legal experts to draw upon from our Data Union advisory network. And like with other parts of the Data Union building experience, we’ll be looking to integrate basic best practice information into the general Data Union builder resource pool pretty shortly.

6). Do you have any guidelines on price setting and market value estimation?

Every data product is going to be worth something completely different, so it’s a bit pointless trying to give guidance based on guesstimates. Of course that doesn’t mean that those numbers can’t be known. For most Data Union products, there will likely be an already existing (if nascent and under the table) market to draw pricing expectations from. We worked like that to help Swash price its data, and we would happily work with other viable projects to find those answers.

7). Can Data Unions themselves be sold (ownership transfer)?

That is a REALLY good question and one that I am concerned about. As one person put it in the market research we conducted at the start of the year, what’s the point in contributing to building a Data Union if it just gets bought out by Google?

So getting this right is going to be a multi-pronged strategy. Firstly, from the ground up, the Data Union builders themselves need to bind themselves into the right structures. Cooperatives (and Data Unions are a sort of platform cooperative) are meant to be owned by users. The issue with them is that it is always hard to raise investment from a small group of prospective users to the point that they can compete with larger commercial enterprises. The cooperativist Nathan Schneider believes he has answers to this, which is why we’ve been working with him on those solutions.

Secondly, Data Unions need to be regulated within the next few years.
In return for legally being the only type of organisation that should be able to handle the licensing of consumer data, they should be responsive to their users and have cooperative equity structures so companies can’t be easily sold without members saying so.On the other hand Data Unions should also be subject to the same provisions as any other platform when it comes to porting data so, since they can be fairly easy to build, users should be able to port their information and streams to new Data Unions if serious governance issues do arise.Is there more that tech can bind in the equity from being captured by adversarial interests? Yes! That’s also why we are working with DAOstack and other DAO builders to get a Data Union DAO off the ground as a PoC. Now THAT is really exciting.8). How do I prove the authenticity of Data Union data?As a buyer? As we know better now, data buying does not, and is likely to never happen at the click of a button. Organisations that purchase data spend tens of thousands of dollars on it and won’t simply click a button. They do their due diligence regardless of the tech on offer. It is up to Data Union builders to ensure their data products are secure, refined, and provide clean feeds of information, otherwise everyone loses. Swash, in their latest release has improved that for example, when they introduced a Captcha button to deter bots. There are of course tools that we can integrate into the Streamr Data Union framework but for now we anticipate that the open source community will fill this gap.9). What level of support can Streamr provide in setting up a Data Union?Building a Data Union is not an easy process. Partly it’s about the tech and we are absolutely here for that part of the journey. At the ground level, we are always improving our developer tooling, video tutorials and technical documentation. Our Growth team, including our Head of Dev Rel, Matthew Fontana, is also always happy to jump on a video call to guide you through any technical issue you may be having or to just chat through a business idea. Our developer forum is also there as a repository of information for past learnings.We also have a Community Fund which can provide substantial financial support right the way from idea to the point where you get VC funding.But as our first Data Union, Swash, has grown, we’ve realised that building up your user base and also connecting with potential data buyers are also very much integral parts of building a Data Union that we also have to support. That’s why we have extra resources available including ground breaking market research and access to our Data Union advisory board who not just believe in Data Unions but who can provide advice and mentorship on; growing your user base, legal and policy issues, and negotiating data sales.10). Do Data Unions need to be open source?Being open source is important to build trust with the users of the Data Union but is not a strict requirement of Data Unions. We’re glad that Swash has done this and we will always encourage other Data Union builders to do the same.Originally published at https://blog.streamr.network on October 7, 2020.Data Unions: Questions on data selling was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 10. 08

It’s time to build Data Unions

We can probably all agree that what’s been happening so far in 2020 has been unprecedented. From the pandemic, to wildfires, to the tensions around the upcoming presidential elections in the US — our control over the events around us seems to be slipping away, and power imbalances, whether from big corporations or political entities, are on the rise.

But corporate control and political power are also a matter of infrastructure and how the systems around us are designed. In today’s turbulent times, access to accurate data is one of the biggest assets for communities and businesses. This becomes increasingly harder as big corporations silo off the information we all create on a daily basis. With the revolutionary Data Union framework, Streamr seeks to turn the current information asymmetry upside down and democratise the sharing and monetising of data flows for everyone.

This is not to say that Data Unions will solve all of our problems overnight, but they nevertheless constitute an important building block, a tool that we’re giving to our community to start creating more open, more democratic flows of information.

Data Unions are an ethical new way to sell user data, through the Streamr peer-to-peer real-time data network. By integrating into the Data Union framework, or building a Data Union app, interested developers can easily bundle and crowdsell the real-time data that their users generate, gain meaningful consent from users and reward them by sharing data sales revenue. By building Data Unions we can create open data ecosystems.

In order to facilitate the building of Data Unions, we are launching the Streamr Data Challenge, a two-month-long hackathon, in cooperation with Lumos Labs. During the program we invite more than 200 India-based programmers, designers and entrepreneurs to innovate and create utilising the Streamr Data Union framework.

We also invite builders from around the world to apply for grants through the Streamr Data Fund. Currently there are 7,500,000 DATAcoins in the fund. Head to the Streamr developer forum if you want to learn more about seed funding opportunities, or share your ideas and get inspired by other Data Union builders.

Data Unions are more important now than ever. We are very excited to witness recent developments at the European Commission. These developments point to a future in which Europe will make Data Unions the new normal, rather than Google and Facebook’s oligopoly. Through the planned Data Intermediary Certification Scheme, which will most likely be introduced in a lightweight version as early as 2021, Data Unions can become official players within the data economy. This will hopefully also encourage greater political and societal interest in new approaches to democratising access to data, and rising demand for transparent data sharing models.

So, what are you waiting for? It’s time to build Data Unions!

Originally published at blog.streamr.network on October 5, 2020.

It’s time to build Data Unions was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 10. 05

Let’s talk about Data Unions

Streamr’s Head of Growth, Shiv Malik recently held an AMA on Data Unions for the GAINS Telegram community. Their questions led to an insightful discussion that we’ve condensed into this blog post.What is the project about in a few simple sentences?At Streamr we are building a real-time network for tomorrow’s data economy. It’s a decentralized, peer-to-peer network which we are hoping will one day replace centralized message brokers like Amazon’s AWS services. On top of that, one of the things I’m most excited about is Data Unions. With Data Unions anyone can join the data economy and start earning money from the data they already produce. Streamr’s Data Union framework provides a really easy way for devs to start building their own data unions and can also be easily integrated into any existing apps.Okay, sounds interesting. Do you have a concrete example you could give us to make it easier to understand?The best example of a Data Union is the first one that has been built out of our stack. It’s called Swash and it’s a browser plugin.You can download it here in a few clicks.Basically it helps you monetise the data you already generate (day in day out) as you browse the web. It’s the sort of data that Google already knows about you. But this way, with Swash, you can actually monetise it yourself.The more people that join the Data Union, the more powerful it becomes and the greater the rewards are for everyone as the data product sells to potential buyers.https://medium.com/media/3a8fab401a95145f9f01212c3c021a6b/hrefVery interesting. What stage is the project/product at? It’s live, right?Yes. It’s currently live in public beta. And the Data Union framework will be launched in just a few weeks. The Network is on course to be fully decentralized at some point next year.How much can a regular person browsing the internet expect to make for example?So that’s a great question. The answer is, no-one quite knows yet. We do know that this sort of data (consumer insights) is worth hundreds of millions and really isn’t available in high quality. So, with a Data Union of a few million people, everyone could be getting 20–50 USD a year. But it’ll take a few years at least to realise that growth. Of course Swash is just one Data Union amongst many possible others (which are now starting to get built out on our platform!)With Swash, they now have 3186 members. They need to get to 50,000 before they become really viable but they are yet to do any marketing. So all that is organic growth.You can explore these numbers in more detail by downloading an executive summary research commissioned to investigate the market and consumer attitudes towards Data Unions.I assume the data is anonymised, by the way?Yes. And there in fact a few privacy protecting tools Swash supplies to its users.How does Swash compare to Brave?So Brave offers a consent model where users are rewarded if they opt in to see selected ads targeted to them from their browsing history. They don’t sell your data as such.Swash can of course be a plugin with Brave and therefore you can make passive income browsing the internet. Whilst also then consenting to advertising if you so want to earn BAT.Of course, it’s Streamr that is powering Swash. And we’re looking at powering other Data Unions, say for example mobile applications.The holy grail might be having already existing apps and platforms out there, integrating Data Union tech into their apps so people can consent (or not) to having their data sold. 
And then getting a cut of that revenue when it does sell.

The other thing to recognise is that the Big Tech companies monopolise data on a vast scale. Data that we of course produce for them. That monopoly is stifling innovation. Take for example a competitor map app. To effectively compete with Google Maps or Waze, they need millions of users feeding real-time data into it. Without that, it’s like Google Maps used to be: static and a bit useless.

Right, so how do you convince these Big Tech companies that are producing these big apps to integrate with Streamr? Does it mean they wouldn’t be able to monetise data as well on their end if it becomes more available through an aggregation of individuals?

If a map application does manage to scale to that level then inevitably Google buys them out — that’s what happened with Waze. But if you have a Data Union that bundles together the raw location data of millions of people, then any application builder can come along and license that data for their app. This encourages all sorts of innovation and breaks the monopoly.

We’re currently having conversations with mobile network operators to see if they want to pilot this new approach to data monetisation. And that’s what’s even more exciting. Just be explicit with users: do you want to sell your data? Okay, if yes, then which data point do you want to sell? The mobile network operator (like T-Mobile, for example) can then organise the sale of the data of those who consent, and everyone gets a cut. Streamr, in this example, provides the backend to port and bundle the data, and also the token and payment rail for the payments.

So for big companies (mobile operators in this case), it’s less logistics, handing over the implementation to you, and simply taking a cut?

It’s a vision that we’ll be able to talk about more concretely in a few weeks’ time 😁

Compared to having to make sense of that data themselves (in the past) and selling it themselves?

Sort of. We provide the backend to port the data and the template smart contracts to distribute the payments. They get to focus on finding buyers for the data and ensuring that the data that is being collected from the app is the kind of data that is valuable and useful to the world. (Through our sister company TX, we also help build out the applications for them and ensure a smooth integration.)

The other thing to add is that the reason why this vision is working is that the current, deeply flawed, data economy is under attack. Not just from privacy laws such as GDPR, but also from Google shutting down cookies, bidstream data being investigated by the FTC (for example) and Apple making changes to iOS 14 to make third party data sharing more explicit for users. All this means that the only real places for thousands of multinationals to buy the sort of consumer insights they need to ensure good business decisions will be owned by Google/FB etc, or from SDKs, or through the Data Union method: overt, rich consent from the consumer in return for a cut of the earnings.

What is the token use case? How did you make sure it captures the value of the ecosystem you’re building?

The token is used for payments on the Marketplace (such as for Data Union products, for example) and also for the broker nodes in the Network (we haven’t talked much about the P2P network, but it’s our project’s secret sauce). The broker nodes will be paid in DATAcoin for providing bandwidth. We are currently working together with BlockScience on our token economics.
We’ve just started the second phase in their consultancy process and will soon be able to share more on the Streamr Network’s token economics. But if you want to summarise the Network in a sentence or two — imagine the BitTorrent network being run by nodes who get paid to do so, except that instead of passing around static files, it’s real-time data streams. That of course means it’s really well suited for the IoT economy. The latest developments on tokenomics were discussed in a recent AMA with Streamr CEO, Henri Pihkala:

Can the Streamr Network be used to transfer data from IoT devices? Is the network bandwidth sufficient? How is it possible to monetise the received data from a huge number of IoT devices?

Yes, IoT devices are a perfect use case for the Network. When it comes to the network’s bandwidth and speed, the Streamr team recently did extensive research to find out how well the network scales. The result was that it is on par with centralized solutions. We ran experiments with network sizes from 32 to 2048 nodes, and in the largest network of 2048 nodes, 99% of deliveries happened within 362 ms globally. To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! So we’re super happy with those results. Here’s a link to the paper.

Yes, the messages in the Network are encrypted. Currently all nodes are still run by the Streamr team. This will change in the Brubeck release — our last milestone on the roadmap — when end-to-end encryption is added. This release adds end-to-end encryption and automatic key exchange mechanisms, ensuring that node operators cannot access any confidential data. If, by the way, you want to get very technical, the encryption algorithms we are using are: AES (AES-256-CTR) for encryption of data payloads, RSA (PKCS #1) for securely exchanging the AES keys and ECDSA (secp256k1) for data signing (same as Bitcoin and Ethereum).

Streamr has three Data Unions: Swash, Tracey and MyDiem. Why does Tracey help fisherfolk in the Philippines monetize their catch data? Do they only work with this country or do they plan to expand?

So yes, Tracey is one of the first Data Unions on top of the Streamr stack. Currently we are working together with the WWF-Philippines and the UnionBank of the Philippines on doing a first pilot with local fishing communities in the Philippines. WWF is interested in the catch data to protect wildlife and make sure that no overfishing happens. And at the same time the fisherfolk are incentivised to record their catch data by being able to access micro loans from banks, which in turn helps them make their business more profitable. So far, we have lots of interest from other places in South East Asia which would like to use Tracey, too. In fact, TX has already had explicit interest in building out the use cases in other countries, and not just for seafood tracking, but also for many other agricultural products.

Are there plans in the pipeline for Streamr to focus on the consumer-facing products themselves or will the emphasis be on the further development of the underlying engine?

We’re all about what’s under the hood. We want third party devs to take on the challenge of building the consumer-facing apps. We know it would be foolish to try and do it all!

We all know that Blockchain has many disadvantages as well, so why did Streamr choose blockchain as a combination for its technology?
What’s your plan to merge Blockchain with your technologies to make it safer and more convenient for your users?So we’re not a blockchain ourselves — that’s important to note. The P2P network only uses BC tech for the payments. Why on earth, for example, would you want to store every single piece of info on a blockchain. You should only store what you want to store. And that should probably happen off chain.So we think we got the mix right there.How does the Streamr team ensure good data is entered into the blockchain by participants?Another great question there! From the product-buying end, this will be done by reputation. But ensuring the quality of the data as it passes through the network — if that is what you also mean — is all about getting the architecture right. In a decentralized network, that’s not easy as data points in streams have to arrive in the right order. It’s one of the biggest challenges but we think we’re solving it in a really decentralized way.What are the requirements for integrating applications with Data Union? What role does the DATA token play in this case?There are no specific requirements as such, just that your application needs to generate some kind of real-time data. Data Union members and administrators are both paid in DATA by data buyers coming from the Streamr marketplace.Regarding security and legality, how does Streamr guarantee that the data uploaded by a given user belongs to him and he can monetise and capitalise on it?So that’s a sort of million dollar question for anyone involved in a digital industry. Within our system there are ways of ensuring that but in the end the negotiation of data licensing will still, in many ways be done human to human and via legal licenses rather than smart contracts. at least when it comes to sizeable data products. There are more answers to this, but it’s a long one!Originally published at blog.streamr.network on September 23, 2020.Let’s talk about Data Unions was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
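The AMA above names the ciphers used for payload protection (AES-256-CTR for data payloads, RSA for key exchange, ECDSA/secp256k1 for signing). As a purely illustrative aside, and not the Streamr SDK’s actual implementation or key-exchange flow, here is a minimal sketch of encrypting and decrypting a JSON payload with AES-256-CTR using Node’s built-in crypto module.

```typescript
// Illustrative only: AES-256-CTR payload encryption, the cipher named in
// the AMA above, using Node's built-in crypto module. This is NOT the
// Streamr SDK's implementation or its key-exchange flow; just a toy
// showing the named cipher in action.
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

const key = randomBytes(32); // 256-bit symmetric key (exchanged via RSA in the described scheme)
const iv = randomBytes(16);  // CTR-mode nonce/counter block

const payload = JSON.stringify({ speed: 42.1, ts: Date.now() });

// Encrypt the payload bytes.
const cipher = createCipheriv('aes-256-ctr', key, iv);
const ciphertext = Buffer.concat([cipher.update(payload, 'utf8'), cipher.final()]);

// Decrypt with the same key and nonce to recover the original payload.
const decipher = createDecipheriv('aes-256-ctr', key, iv);
const roundTripped = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');

console.log(roundTripped === payload); // true
```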

Streamr network

20. 09. 23

Dev Update July, August 2020

Although summer is usually a time to take things slow, this year we decided to lean in, ship, publish, and as always, make steady progress towards our longer-term vision of full decentralization. Here are some of the highlights:

Network whitepaper published. The real-world experiments show it’s fast and scalable. This blog highlights some key findings, and you can also check out the full whitepaper.
Launched the website update with an updated top page, a new Data Unions page and Papers page.
Data Unions 2.0 smart contracts are now ready and undergoing a third party security audit. Remaining work consists of loose ends, such as the SDKs and Core application, as well as creating an upgrade path for existing DUs.
Data Unions public beta running smoothly. Incremental improvements were made in preparation for the official launch.
Started work on the Network Explorer, which shows the real-time structure and stats of the Streamr Network.
Started work on human-readable, hierarchical, globally unique stream IDs with namespaces based on ENS, for example streamr.eth/demos/tramdata.
Storage rewrite complete, now setting up the new storage cluster in production. Will fix resend problems and prepare for opening up and decentralizing the storage market.
Token economics research with BlockScience continues in Phase 2, working on simple cadCAD models.
End-to-end encryption key exchange ready in Java SDK, while JS SDK is still WIP.
Buyer whitelisting feature added to the Marketplace.

Network findings

Releasing the Network whitepaper marks the completion of our academic research phase of the current Network milestone. This research is especially important to the Streamr project’s enterprise adoption track, and focused on the latency and scalability of the network, battle tested with messages propagated through real-world data centres around the world. The key findings were:

The upper limit of message latency is roughly around 150–350 ms globally, depending on network size
Message latency is predictable
The relationship between network size and latency is logarithmic

These findings are impressive! Not only do they show that the Network is already on par with centralized message brokers in terms of speed, they also give us great confidence that the fully decentralized network can scale without introducing significant message propagation latency. We invite you to read the full paper to learn more.

Network Developments

While the release of the Network whitepaper has been a long-term side project for the Network team, development of the Network continues to accelerate. As real-time message delivery is the primary function of the Network, so far we haven’t focused much on decentralizing the storage of historical messages. However, as the whole Network is heading towards decentralization, so is the storage functionality. The long-term goal regarding storage is that anyone will be able to join in and run a storage node. Stream owners will be able to hire one or more of these independent storage nodes to store the historical data in their streams. The completion of the storage rewrite is another big step towards full decentralization.

Token economics research

The token economics research track with BlockScience has proceeded to Phase 2. In Phase 1, mathematical formulations of the actors, actions, and agreements in the Network were created. In the current Phase 2, simulation code is being written for the first time. The simulations leverage the open source cadCAD framework developed by BlockScience.
The models developed in Phase 2 are simple toy models, the purpose of which is to play around with the primitives defined in Phase 1 and verify that they are implemented correctly. In Phase 3, the first realistic models of the Streamr Network economy will be implemented.

Data Unions upgrade

On the Data Unions front, development of the 2.0 architecture is progressing well and the smart contracts are being security audited at the moment. Robustness and security have been the key drivers for this upgrade, and while the 1.0 architecture is running smoothly, we need to be forward-thinking and prepare for the kind of scale and growth we expect to see in the future. Data Unions 2.0 will be the first big upgrade after the launch of the current architecture. Data Unions that are created with the current architecture will be upgradable to the Data Unions 2.0 architecture once available. We look forward to describing the upgrade in detail in a future blog post.

More control over your data

We released a heavily requested feature on the Marketplace — buyer whitelisting. This feature allows data product owners and Data Union admins to be in control of who can purchase and gain access to the product’s data. These features are useful in growing enterprise adoption of the Marketplace, because in B2B sales it’s often required that the transacting parties identify each other and perhaps sign traditional agreements.

Deprecations and breaking changes

A number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked ‘Date TBD’ will be happening in the medium term, but a date has not yet been set.

The API endpoints for explicitly deleting data will be removed on the next update, because they are rarely used and are not compatible with decentralized storage. Going forward, storage nodes will expire old data based on the data retention period set on the stream.
/api/v1/streams/${id}/deleteDataUpTo
/api/v1/streams/${id}/deleteDataRange
/api/v1/streams/${id}/deleteAllData

The API endpoints to upload CSV files to streams will be removed in the next update, because the feature is rarely used and the centralized backend is unable to sign the data on behalf of the user. Storing historical messages to streams can be done by publishing the messages to streams normally.
/api/v1/streams/${id}/uploadCsvFile
/api/v1/streams/${id}/confirmCsvFileUpload

(Date TBD): Support for email/password authentication will be dropped. Users need to connect an Ethereum wallet to their Streamr user unless they’ve already done so. As part of our progress towards decentralization, we will end support for authenticating based on centralized secrets such as passwords. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.

(Date TBD): Support for API keys will be dropped. Applications integrating to the API should authenticate with the Ethereum key-based challenge-response protocol instead of API keys. As part of our progress towards decentralization, we will eventually end support for authenticating based on centralized secrets such as API keys. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.

(Date TBD): Support for unsigned data will be dropped. Unsigned data on the Network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed.
As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will be ceased as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).Thanks for reading! If you’re a developer interested in contributing to the Streamr ecosystem, consider applying to the Community Fund for financial backing to fast track your plans.Originally published at blog.streamr.network on September 15, 2020.Dev Update July, August 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 09. 15

News: Streamr appoints new ...

The Streamr project is pleased to announce that Matthew Fontana has been appointed as Head of Developer Relations. Matthew will be starting the role a few weeks before the expected public launch of the Streamr Data Union framework — where developers can build applications that enable users to control, monetise and license their data in tandem with thousands of others.

“As a former front end developer for Streamr, Matthew was the perfect candidate to take on the role,” said Streamr co-founder, Henri Pihkala. “Not only does he know our technology stack inside out, and have the requisite deep understanding of crypto, he also has great presentation and teamwork skills, and has already been instrumental in the creation of the developer docs and video tutorials we have today. I welcome him to the new position and really look forward to watching our ecosystem grow under his lead.”

New appointee Matthew Fontana said, “I’m excited to join the Growth team in my new role, and I’m really looking forward to inspiring developers and giving them the confidence to build on the Streamr stack. My goal is to ensure that the Streamr developer ecosystem becomes as expansive as possible. Our tech is bleeding edge and the platform is truly empowering in every sense, so I expect to be pretty busy.”

[Photo: Streamr’s new Head of Developer Relations, Matthew Fontana]

Streamr’s Head of Growth, Shiv Malik said: “As long-standing members of the Streamr project, Matthew and I already have a close working relationship, so I’m really looking forward to working with him on a day-to-day basis. One thing I’ve always appreciated is that whenever there is pressure of a deadline, Matthew has always brought a calm and quiet air of diligence and expertise to the moment. The Developer Relations role is now more important than ever. Matthew’s predecessor, Weilei Yu, created a fantastic foundation over the last year-and-a-half, helping to establish an ecosystem, a community forum, the basic developer documentation and of course helping Swash and others get established. As part of the wider Growth & Marketing team effort, Matthew will no doubt take the Streamr ecosystem, with a current focus on building out several more Data Union startups, to new heights over the next few years.”

If you are a third-party developer looking to learn more about what the Streamr stack can do for you, contact Matthew Fontana via:
The community forum
Telegram @matthew_streamr
Twitter @mattofontana
LinkedIn matthewjfontana
Or via email matthew.fontana@streamr.network

Originally published at blog.streamr.network on September 4, 2020.

News: Streamr appoints new Head of Developer Relations was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 09. 07

How to create a Data Union

Data Unions are more than just a new data monetisation strategy — they are the beginning of a new relationship between creators and their users. This post serves as a getting started guide for those creators that are ready to get building.

Data Unions (DUs) enable creators to share data sales revenue with users via crowdsourced, scalable data sets, generated by the users of their apps and services. DUs rest proudly on top of the Streamr and Ethereum stacks. Under the hood:
Ethereum is used to store and transfer value,
The Streamr Network transports the real-time data,
The Streamr Core app is used to build and manage the DU contract, and
The Streamr Marketplace monetises the data.

How to start a Data Union? Here are the four steps:
1. Define the sort of data you’ll be streaming to your DU.
2. Deploy the DU contract on Ethereum.
3. Integrate your end user app.
4. Publish the DU on the Marketplace.

I will briefly explain these steps, and if you prefer, you can also get to know the process by watching me create a DU in the screencast series, or by reading the DU docs. The accompanying demonstration GitHub repo of example code can also be found here.

1. Define the sort of data you’ll be streaming to your DU

As the DU creator, you’ll first need to decide what sort of data will be included in the DU and how to model that data into streams. A firehose approach is typical and we have some general advice on that topic in the streams section of the docs.

2. Deploy the DU contract on Ethereum

This part requires some crypto basics. If it’s your first time, please check out the Getting Started section of the docs. Using the Streamr Core interface you will be customising the parameters of the DU contract, such as the price of the data and the revenue share percentage.

[Embedded screencast]

3. Integrate your end user app

Using one of Streamr’s client libraries is highly recommended. The essential functionality, such as member balance checks and member withdrawals, is wrapped in easy to use library method calls (a small illustrative sketch follows at the end of this post).

4. Publish the DU on the Marketplace

If you’ve gotten this far, this step is a breeze. It’s a one-click publish Ethereum transaction to have your DU available for purchase on the Marketplace.

[Embedded screencast]

🎉 Congrats! You’re all set. 🎉

The Docs go much deeper into the implementation details and we encourage you to reach out on the developer forums to share your experience with the platform.

Originally published at blog.streamr.network on September 1, 2020.

How to create a Data Union was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
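To give a feel for step 3, here is a hypothetical sketch of the kind of member balance check and withdrawal call mentioned above. The helper names (getDataUnion, getWithdrawableEarnings, withdrawAll) are illustrative placeholders rather than the exact API of any particular streamr-client release; consult the DU docs for the real method names in your version.

```typescript
// Hypothetical integration sketch for a Data Union member app.
// Method names below are placeholders for the balance-check and
// withdrawal helpers described in the post; they are NOT guaranteed
// to match the exact streamr-client API of any given version.
import StreamrClient from 'streamr-client';

const client = new StreamrClient({
  auth: { privateKey: process.env.MEMBER_PRIVATE_KEY! },
});

async function checkAndWithdraw(dataUnionAddress: string, memberAddress: string) {
  const dataUnion = client.getDataUnion(dataUnionAddress);                  // placeholder
  const earnings = await dataUnion.getWithdrawableEarnings(memberAddress);  // placeholder
  console.log(`Withdrawable earnings (wei): ${earnings.toString()}`);

  // Withdrawing triggers an Ethereum transaction that pays the member
  // their share of the Data Union's data sales revenue.
  await dataUnion.withdrawAll();                                            // placeholder
}

checkAndWithdraw('0xDataUnionContract...', '0xMemberAddress...').catch(console.error);
```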

Streamr network

20. 09. 02

Streamr Network: Performanc...

The Corea milestone of the Streamr Network went live in late 2019. Since then a few people in the team have been working on an academic whitepaper to describe its design principles, position it with respect to prior art, and prove certain properties it has. The paper is now ready, and it has been submitted to the IEEE Access journal for peer review. It is also now published on the new Papers section on the project website. In this blog, I’ll introduce the paper and explain its key results. All the figures presented in this post are from the paper.

The reasons for doing this research and writing this paper were simple: many prospective users of the Network, especially more serious ones such as enterprises, ask questions like ‘how does it scale?’, ‘why does it scale?’, ‘what is the latency in the network?’, and ‘how much bandwidth is consumed?’. While some answers could be provided before, the Network in its currently deployed form is still small-scale and can’t really show a track record of scalability for example, so there was clearly a need to produce some in-depth material about the structure of the Network and its performance at large, global scale. The paper answers these questions.

Another reason is that decentralized peer-to-peer networks have experienced a new renaissance due to the rise in blockchain networks. Peer-to-peer pub/sub networks were a hot research topic in the early 2000s, but not many real-world implementations were ever created. Today, most blockchain networks use methods from that era under the hood to disseminate block headers, transactions, and other events important for them to function. Other megatrends like IoT and social media are also creating demand for new kinds of scalable message transport layers.

The latency vs. bandwidth tradeoff

The current Streamr Network uses regular random graphs as stream topologies. ‘Regular’ here means that nodes connect to a fixed number of other nodes that publish or subscribe to the same stream, and ‘random’ means that those nodes are selected randomly.

Random connections can of course mean that absurd routes get formed occasionally, for example a data point might travel from Germany to France via the US. But random graphs have been studied extensively in the academic literature, and their properties are not nearly as bad as the above example sounds — such graphs are actually quite good! Data always takes multiple routes in the network, and only the fastest route counts. The less-than-optimal routes are there for redundancy, and redundancy is good, because it improves security and churn tolerance.

There is an important parameter called node degree, which is the fixed number of nodes to which each node in a topology connects. A higher node degree means more duplication and thus more bandwidth consumption for each node, but it also means that fast routes are more likely to form. It’s a tradeoff; better latency can be traded for worse bandwidth consumption. In the following section, we’ll go deeper into analyzing this relationship.

Network diameter scales logarithmically

One useful metric to estimate the behavior of latency is the network diameter, which is the number of hops on the shortest path between the most distant pair of nodes in the network (i.e. the “longest shortest path”). The below plot shows how the network diameter behaves depending on node degree and number of nodes.

[Figure: Network diameter]

We can see that the network diameter increases logarithmically (very slowly), and a higher node degree ‘flattens the curve’.
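As an aside, this logarithmic growth is easy to reproduce with a small standalone simulation. The sketch below is not Streamr code; it builds only approximately regular random topologies via stub matching, and then measures the diameter of topologies of increasing size at a fixed node degree.

```typescript
// Toy simulation: how the diameter of an (approximately) random regular
// topology grows with network size, at a fixed node degree.
// Standalone illustration only; this is not Streamr Network code.

function randomRegularish(n: number, degree: number): number[][] {
  // Configuration-model style stub matching. Self-loops and duplicate
  // edges are simply skipped, so a few nodes may end up slightly
  // under-connected; good enough for a rough estimate.
  const adj: Set<number>[] = Array.from({ length: n }, () => new Set<number>());
  const stubs: number[] = [];
  for (let v = 0; v < n; v++) for (let k = 0; k < degree; k++) stubs.push(v);
  for (let i = stubs.length - 1; i > 0; i--) { // Fisher-Yates shuffle
    const j = Math.floor(Math.random() * (i + 1));
    [stubs[i], stubs[j]] = [stubs[j], stubs[i]];
  }
  for (let i = 0; i + 1 < stubs.length; i += 2) {
    const a = stubs[i], b = stubs[i + 1];
    if (a !== b) { adj[a].add(b); adj[b].add(a); }
  }
  return adj.map((s) => [...s]);
}

function diameter(adj: number[][]): number {
  // Longest shortest path, found by running a BFS from every node.
  // Unreachable nodes (rare with these parameters) are ignored.
  let max = 0;
  for (let src = 0; src < adj.length; src++) {
    const dist = new Array<number>(adj.length).fill(-1);
    dist[src] = 0;
    const queue = [src];
    for (let qi = 0; qi < queue.length; qi++) {
      const v = queue[qi];
      for (const w of adj[v]) {
        if (dist[w] === -1) { dist[w] = dist[v] + 1; queue.push(w); }
      }
    }
    for (const d of dist) if (d > max) max = d;
  }
  return max;
}

// Node degree 4 (the default discussed below); sizes grow 8x each step,
// yet the measured diameter should only creep up by a hop or two.
for (const n of [64, 512, 4096]) {
  console.log(`n=${n} diameter=${diameter(randomRegularish(n, 4))}`);
}
```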
This logarithmic growth is a property of random regular graphs, and it is very good news — growing from 10,000 nodes to 100,000 nodes only increases the diameter by a few hops! To analyse the effect of the node degree further, we can plot the maximum network diameter using various node degrees:

[Figure: Network diameter in a network of 100,000 nodes]

We can see that there are diminishing returns for increasing the node degree. On the other hand, the penalty (number of duplicates, i.e. bandwidth consumption) increases linearly with node degree:

[Figure: Number of duplicates received by the non-publisher nodes]

In the Streamr Network, each stream forms its own separate overlay network and can even have a custom node degree. This allows the owner of the stream to configure their preferred latency/bandwidth balance (imagine such a slider control in the Streamr Core UI). However, finding a good default value is important. From this analysis, we can conclude that:

- The logarithmic behavior of network diameter leads us to hope that latency might behave logarithmically too, but since the number of hops is not the same as latency (in milliseconds), the scalability needs to be confirmed in the real world (see next section).
- A node degree of 4 yields a good latency/bandwidth balance, and we have selected it as the default value in the Streamr Network. This value is also used in all the real-world experiments described in the next section.

It’s worth noting that in such a network, the bandwidth requirement for publishers is determined by the node degree and not the number of subscribers. With a node degree of 4 and a million subscribers, the publisher only uploads 4 copies of a data point, and the million subscribing nodes share the work of distributing the message among themselves. In contrast, a centralized data broker would need to push out a million copies.

Latency scales logarithmically

To see if actual latency scales logarithmically in real-world conditions, we ran large numbers of nodes in 16 different Amazon AWS data centers around the world. We ran experiments with network sizes from 32 to 2048 nodes. Each node published messages to the network, and we measured how long it took for the other nodes to get the message. The experiment was repeated 10 times for each network size.

The below image displays one of the key results of the paper. It shows a CDF (cumulative distribution function) of the measured latencies across all experiments. The y-axis runs from 0 to 1, i.e. 0% to 100%.

[Figure: CDF of message propagation delay]

From this graph we can easily read things like: in a 32-node network (blue line), 50% of message deliveries happened within 150 ms globally, and all messages were delivered in around 250 ms. In the largest network of 2048 nodes (pink line), 99% of deliveries happened within 362 ms globally.

To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! Decentralization comes with unquestionable benefits (no vendor lock-in, no trust required, network effects, etc.), but if such protocols are inferior in terms of performance or cost, they won’t get adopted. It’s pretty safe to say that the Streamr Network is on par with centralized services even when it comes to latency, which is usually the Achilles’ heel of P2P networks (think of how slow blockchains are!). And the Network will only get better with time.
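As a side note, percentile figures like “50% within 150 ms” or “99% within 362 ms” are read directly off the distribution of measured delivery delays. Here is a small sketch of that reading, with placeholder numbers rather than the actual measurements:

```python
# Sketch: latency percentiles and an empirical CDF from measured delays.
# The values below are placeholders, not the experiment's data.
import numpy as np

delays_ms = np.array([112, 131, 149, 156, 170, 188, 203, 251], dtype=float)

for pct in (50, 95, 99):
    print(f"{pct}% of deliveries completed within {np.percentile(delays_ms, pct):.0f} ms")

# The CDF is simply the sorted delays plotted against their cumulative fraction.
xs = np.sort(delays_ms)
ys = np.arange(1, len(xs) + 1) / len(xs)
```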
Then we tackled the big question: does the latency behave logarithmically?

[Figure: Mean message propagation delay in Amazon experiments]

Above, the thick line is the average latency for each network size. From the graph, we can see that the latency grows logarithmically as the network size increases, which means excellent scalability.

The shaded area shows the difference between the best and worst average latencies in each repeat. Here we can see the element of chance at play; due to the randomness in which nodes become neighbours, some topologies are faster than others. Given enough repeats, some near-optimal topologies can be found. The difference between the average topologies and the best topologies gives us a glimpse of how much room for optimisation there is; in other words, how much improvement a smarter-than-random topology construction could offer while still staying in the realm of regular graphs. Out of the observed topologies, the difference between the average and the best observed topology is between 5–13%, so not that much. Other subclasses of graphs, such as irregular graphs, trees, and so on, can of course unlock more room for improvement, but they are different beasts and come with their own disadvantages too.

It’s also worth asking: how much worse is the measured latency compared to the fastest possible latency, i.e. that of a direct connection? While having direct connections between a publisher and subscribers is definitely not scalable, secure, or often even feasible due to firewalls, NATs and such, it’s still worth asking what the latency penalty of peer-to-peer is.

[Figure: Relative delay penalty in Amazon experiments]

As you can see, this plot has the same shape as the previous one, but the y-axis is different. Here, we are showing the relative delay penalty (RDP). It’s the latency in the peer-to-peer network (shown in the previous plot), divided by the latency of a direct connection measured with the ping tool. So a direct connection equals an RDP value of 1, and the measured RDP in the peer-to-peer network is roughly between 2 and 3 in the observed topologies. It increases logarithmically with network size, just like absolute latency.

Again, given that latency is the Achilles’ heel of decentralized systems, that’s not bad at all. It shows that such a network delivers acceptable performance for the vast majority of use cases, only excluding the most latency-sensitive ones, such as online gaming or arbitrage trading. For most other use cases, it doesn’t matter whether it takes 25 or 75 milliseconds to deliver a data point.

Latency is predictable

It’s useful for a messaging system to have consistent and predictable latency. Imagine for example a smart traffic system, where cars can alert each other about dangers on the road. It would be pretty bad if, even minutes after publishing it, some cars still haven’t received the warning. However, such delays easily occur in peer-to-peer networks. Everyone in the crypto space has seen first-hand how plenty of Bitcoin or Ethereum nodes lag even minutes behind the latest chain state.

So we wanted to see whether it would be possible to estimate the latencies in the peer-to-peer network if the topology and the latencies between connected pairs of nodes are known.
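The estimation idea is essentially a weighted shortest-path computation: treat each overlay connection as an edge whose weight is the measured one-hop latency, then take the fastest route from the publisher to every other node. Here is a minimal sketch of that idea; the topology and per-link latencies are invented for illustration, and this is not the paper’s actual tooling:

```python
# Sketch: estimating propagation delay as the weighted shortest path over
# measured link latencies. Topology and numbers are invented for illustration.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("helsinki", "frankfurt", 28.0),   # one-hop latency in ms
    ("frankfurt", "virginia", 92.0),
    ("helsinki", "virginia", 115.0),
    ("virginia", "tokyo", 160.0),
    ("frankfurt", "tokyo", 230.0),
])

publisher = "helsinki"
# Dijkstra's algorithm finds the fastest route from the publisher to every node;
# the result is a lower-bound estimate, since real deliveries also pay a small
# processing delay at each hop.
estimates = nx.single_source_dijkstra_path_length(g, publisher, weight="weight")
for node, delay in sorted(estimates.items(), key=lambda item: item[1]):
    print(f"{publisher} -> {node}: estimated {delay:.0f} ms")
```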
We applied Dijkstra’s algorithm to compute estimates for average latencies from the input topology data, and compared the estimates to the actual measured average latencies:

[Figure: Mean message propagation delay in Amazon experiments]

We can see that, at least in these experiments, the estimates seemed to provide a lower bound for the actual values, and the average estimation error was 3.5%. The measured value is higher than the estimated one because the estimation only considers network delays, while in reality there is also a little bit of a processing delay at each node.

Conclusion

The research has shown that the Streamr Network can be expected to deliver messages in roughly 150–350 milliseconds worldwide, even at a large scale with thousands of nodes subscribing to a stream. This is on par with centralized message brokers today, showing that the decentralized and peer-to-peer approach is a viable alternative for all but the most latency-sensitive applications.

It’s thrilling to think that by accepting a latency only 2–3 times longer than the latency of an unscalable and insecure direct connection, applications can interconnect over an open fabric with global scalability, no single point of failure, no vendor lock-in, and no need to trust anyone — all that becomes available out of the box.

In the real-time data space, there are plenty of other aspects to explore, which we didn’t cover in this paper. For example, we did not measure throughput characteristics of network topologies. Different streams are independent, so clearly there’s scalability in the number of streams, and heavy streams can be partitioned, allowing each stream to scale too. Throughput is mainly limited, therefore, by the hardware and network connection used by the network nodes involved in a topology. Measuring the maximum throughput would basically be measuring the hardware as well as the performance of our implemented code. While interesting, this is not a high-priority research target at this point in time. And thanks to the redundancy in the network, individual slow nodes do not slow down the whole topology; the data will arrive via faster nodes instead.

Also out of scope for this paper is analysing the costs of running such a network, including the OPEX for publishers and node operators. This is a topic of ongoing research, which we’re currently doing as part of designing the token incentive mechanisms of the Streamr Network, due to be implemented in a later milestone.

I hope that this blog has provided some insight into the fascinating results the team uncovered during this research. For a more in-depth look at the context of this work, and more detail about the research, we invite you to read the full paper.

If you have an interest in network performance and scalability from a developer or enterprise perspective, we will be hosting a talk about this research in the coming weeks, so keep an eye out for more details on the Streamr social media channels. In the meantime, feedback and comments are welcome. Please add a comment to this Reddit thread or email contact@streamr.network.

Originally published at blog.streamr.network on August 24, 2020.

Streamr Network: Performance and Scalability Whitepaper was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 08. 25

Blockchain won’t solve your...

Blockchain won’t solve your traceability issues if you’re not capturing accurate data — the TX approach to understanding the problem spaceBetween 2000 to 2018, the global value of goods exported has shot from 6.45 trillion to 19.5 trillion U.S. dollars. With an increasing demand for exported products, there is also an increasing demand for accurate traceability data. According to the 2019 Food and Health Survey, nearly two-thirds of consumers said recognising the ingredients in a product impacted their buying decisions. Food labels are becoming more important than ever, as consumers increasingly seek information about the ingredients that go into their food. It’s not just the ingredients themselves consumers are starting to demand. TE Food The Trusted Food On Blockchain suggests,“The pure presentation of traceability information will shift to telling the “story of the food” in a way which the consumers can easily absorb. Attaching photos, videos, inspection documents, nutrition data will make the journey of the food more interesting”.There is huge demand for traceability solutions in a multitude of industries, including agriculture, fisheries, aggregates, and high value products such as diamonds and alcoholic spirits, to name a few. With the hype of blockchain over the last few years, numerous companies are now offering blockchain enabled traceability solutions, but have failed to improve the overall quality of data being captured and shared. Rightly so, this has led to some fatigue on the use of blockchain in supply chain cases.This is because a blockchain does not solve traceability issues. A blockchain does play a key role in traceability, as it ensures the data logged is not tampered with once it has been saved to the blockchain. But the fundamental problem that must be solved before data is entered onto the chain is its accuracy and correctness. If the data was inaccurate before it was saved to the blockchain, it will continue to be inaccurate when you come back to access it. Without verification, a blockchain serves as an immutable ledger of garbage data that cannot be deleted. The issues surrounding the quality of data must be solved before it is placed on the chain.“If the data was inaccurate before it was saved to the blockchain, it will continue to be inaccurate when you come back to access it.”This is where we do things differently at TX Tomorrow Explored. Our services commence with an activity we refer to as an Assess study. We want to offer our clients more than a software solution, we want to consider the space around the technical problem to understand all the pain points before making recommendations. To make this possible, we have structured our services in such a way that we first analyse the problem space so we can get to the heart of the issue, before we start discussing the software solution. This approach is a result of our team composition. The Assess phase is delivered by a combination of business consultants, service designers and developers. This means that in our analysis, we truly consider both the technical and business aspects of the problem. 
Issues like the one mentioned above concerning verification are more likely to be identified when analysis is done from a variety of angles — industry, business and technical.“Issues like the one mentioned above concerning verification are more likely to be identified when analysis is done from a variety of angles — industry, business and technical.”In one of our signature projects, Tracey — a traceability and trade data application used by fisherfolk in the Philippines — we performed an Assess with our partners at WWF, UnionBank and Streamr. In the Assess, we identified the need for a verification solution to support the use of a blockchain-enabled ledger for capturing and disseminating catch information. As a result, Tracey includes functionality that provides fisherfolk with incentives for providing data that has been verified. This makes Tracey a more complete solution for gathering data in the “first mile” of the supply chain. The solution can be applied in other industries that face challenges in ensuring the accuracy of data captured in the first mile.Additional content: Listen to TX Podcast with UnionBank on how Tracey is helping unbanked fisherfolk gain access to microfinancingWhat does the Assess involve?Let’s assume you have identified a problem or business opportunity that needs to be addressed with a traceability solution. The first thing we need to do is validate this hypothesis with an Assess study. This low cost activity will give you some level of educated reassurance that a traceability solution is going to solve and deliver the benefits you desire before investing too heavily in a particular software. The Assess work can last anywhere from 2 to 6 weeks depending on the complexity of the project. We do this work in close collaboration with our client — in addition to validating and better understanding the problem, we also want to ensure there is a close alignment on the end vision and objectives as we work through the process. The Assess activity includes:Data Value Chain Analysis: This is the main activity that involves conducting primary research in the form of focus interviews and workshops with the actors in the value chain, coupled with a desktop research investigation on factors surrounding the value chain, such as compliance laws for exporting products and other relevant restrictions that need to be complied with. Undertaking this activity allows us to build a picture of the problem space like the one in figure 1 below.Digital strategy: This means how we transition from what you have today to what you’re aiming to have at the end of the project. Effectively, this is a roadmap with key activities identified, which will guide you through the problem space to achieving your objectives.Decision gateway: This is an open discussion on the pros and cons of introducing a traceability solution, and whether the business case is realistically viable for your organisation.All things being equal and we agree to go ahead, we’ll complete the Assess with an outline description of the recommended pilot, a wireframe of the product and a program for testing.We work using agile methodologies adopting regular sprints and retrospectives throughout the delivery process.The illustration below is an example where one of our business consultants conducted an Assess study on a handline fisheries value chain in the Philippines. 
The key activities throughout the “bait to plate” value chain were researched and surveyed, with consideration given to the main actors, tasks, data collected and disseminated, and legal compliance.Handline Fishery Value ChainOnce the Assess phase has been completed, we move into the Testing phase, which is followed by the Embedding phase. Testing can last anywhere from 8 to 12 weeks depending on the complexity of the app needed. We would always recommend keeping the software solution to a Minimum Viable Product to keep things cost-effective during this period of testing and evaluation. Once some tangible results are retrieved from the Pilot, then the software solution can be improved and optimised, ready for full roll-out in the Embed phase.We call the final phase Embed, because there’s far more to implementing a good technology solution than simply handing it over. Sometimes changes to processes are needed, training for staff, integration with other technologies etc. Whatever it is, we’ll try to work alongside you throughout the process, until you’re satisfied that the solution is suitably embedded into your organisation and supply chain.How we workIf you’re interested in undertaking an Assess for your organisation, please contact us at TX.company/contact.Originally published at blog.streamr.network on August 19, 2020.Blockchain won’t solve your traceability issues if you’re not capturing accurate data — the TX… was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 08. 19

Decentralization may fix th...

Can’t Touch ThisWarren Buffett was once asked why he owned a 5% stake in The Walt Disney Company. In his reply, Buffet drew a comparison between Mickey Mouse and the human actors that are on the payroll of most movie studios: “It is simple, the Mouse has no agent”.Buffet reasoned that once you have created the first unit of an intangible asset, every unit after that costs next to nothing to produce. As you draw Mickey Mouse, you generate an infinitely scalable asset, a feature unshared by human actors. Thus, while other competitors in the entertainment industry had comparable revenues, their profits were thinner due to the costs associated with paying the cast, the agents, and directors.Software is the same thing; it may be expensive to develop, but to multiply subsequent lines of code is a copy and paste process. If you were to sell computer hardware, on the other hand, every additional unit would require extra materials and labour.The scalability feature of intangible assets, such as intellectual property and software, unlocks exponential growth and global reach because it allows organisations to escape the positive relationship between output and total cost of production.Unfortunately, or luckily, we cannot softwarise everything. So how would you reach global presence in an industry that requires hardware to run operations, like transport or hospitality?Here’s the pitch:Because tangible assets are not quickly and cheaply scalable, I am not going to invest in them. My workers will.Be Less the new More (For More we cannot afford)The rise of the sharing economy can be attributed to the minimalist movement, and the rise of the minimalist movement can be attributed to millennials being broke.Sharing economy firms have been successful in creating a better experience for the consumer: from an easier, faster booking experience to review systems that incentivise quality maximisation. Nevertheless, the success of these companies is largely due to more competitive pricing. But cost reduction hasn’t been achieved through the joint or alternating use of a resource that would be sitting idle or underexploited. Costs are merely transferred to the gig workers.Take ride-sharing apps. These companies take roughly one-third of ride fares for providing a booking platform, while the driver has to shoulder the car purchase/lease, maintenance, gas, washes, insurance, social security expenses and labour risk — which, in times of COVID, is no small factor. As political economist Robert Reich puts it:“The big money goes to the corporations that own the software. The scraps go to the on-demand workers.”Make it ‘till you Fake itSince software is very easy to scale, it is also to replicate. Intangible assets tend to create spillovers, and firms realised that to spear away competition they should become as big as possible as quickly as possible. While this strategy might work, coming up with a unit economics that holds true only in monopolistic conditions is not going to be labelled as savvy business management. But these are strange times, and investors are willing to keep unprofitable companies on life support high on a cocktail of loss aversion, excess capital and a perverted love for the founders.As these firms reach the juggernaut status without any trace of profitability, the narrative keeps its pendulous motion between “We are good because we bring more wage-earning opportunities to more people” and “we expect to be profitable within one year” to maintain the pretence of a functioning business model. 
In this fabrication, the winners are the executives, with their obscene compensation that can be solely justified by fundamental attribution errors (FAE), and the investors who passed the ticking bomb to a greater fool. Unsurprisingly, the parties that are better off are the unproductives.Tech, tech-enabled and tick-tacksThe sharing economy’s value proposition is nothing newfangled. A p2p network system that connects offer with demand is intuitively a good idea. In an analogue fashion, we have been doing that for the past 8000 years. Without much originality, the business model was executed through a U-form corporate structure, which implies central authority and coordination, as well as data processing.Example of U-form business structureThe centralized structure provides an attractive benefit in enabling the fulfilment of the company’s vision through a clear chain of command. Innovation, for instance, is an exercise that requires some degree of centralization. In her book Quiet, Susan Cain argues that brainstorming with a large group of individuals tends to have a levelling effect on creativity because the initial goal of conceiving great ideas quickly turns into reaching a consensus among participants, thus diluting the quality of the concepts.In other words, if you are in the business of doing things differently you shouldn’t give too much weight to the opinion of the crowd.“If I had asked people what they wanted, they would have said faster horses.” (Attributed to Henry Ford)Sharing economy firms brought some change, but their type of innovation is more of a one-off. These firms are not in the business of continuous innovation in the way pure tech enterprises are. Still, they like to market themselves as such because on Wall Street “The new thing” never goes out of fashion ( adding a premium to a stock price is, sometimes, as easy as adding “Technologies” or “Blockchain” to the name of the organisation ). Yet under the surface, these firms are tech-enabled service companies. At the core, they are exchange systems. And exchange systems work better without a central processing unit.While distributing creativity is a bad idea, distributing big data processing is a good one. Because the selection of experts or expert committees happens in a deterministic manner, their analysis may be more subject to political forces and other biases. Further, the greater headcount gives crowds a unique advantage when it comes to finding equilibrium points.History has plenty of examples of data processing gone wrong because of centralization. To take a relatively recent one: when bread prices are set by a nepotistically selected group of government officials, people line up for the crumbs. When market forces are free to set prices, bread lines up for people.The NetworkTo the on-demand worker, software in the sharing economy equals the latifundia in Medieval Europe: access to it is subject to the reverence of the holy software-owner and the acceptance to have no voice and little to no rights.A fairer system may be a leaner system, scrubbed from the old business mould. Regrettably, in the early twenty-first century, the corporate apparatus comes with the software just like the medieval landowner comes with the field. 
While the development of the end-user application can be left to anyone as long as the right incentives are in place, engineering a global back-end infrastructure for data transfer is a way more complex task.To substitute an enterprise messaging system you would need a universal, secure, robust, neutral, accessible and permissionless real-time data network. This should provide on-demand scalability and minimal up-front investment. There should be no vendor lock-in, no proprietary code and no need to trust a third party with the data flowing through the network. This system should also integrate smart contracts, which are self-executing and have the terms of the agreement written into lines of code. Taking again the example of ride-sharing: GPS data points may funnel down to the smart contract to assess whether the ride has been completed according to the path set forth when the service was booked, thus releasing the ride fees to the driver only if the contractual obligations are met.If you think this is a lot to ask, the Streamr Network ticks all the boxes.It should be said that the Network’s full decentralization will be achieved in the next stage of development. Nevertheless, as of today, it is up and running like a charm.2.0The new organisation requires pioneering rules to be coded in. Among the many: the logic by which the smart contract rewards the operators, how ownership is distributed and diluted, which assets claim ownership and how governance takes place. These questions have no right answer and require some rational extravagance in exploring these uncharted territories. Research in the area, albeit limited, is growing steadily and is attracting an increasing number of professionals and devotees from all disciplines.Idea for a Decentralized Sharing Economy FirmAs regards exchange systems, the argument that a centralized solution is always better is a weak one. While the execution risk in decentralizing the sharing economy is quite high, decentralization is a much stronger proposition because on top of disintegrating agency costs it enables its working community to own the business as they earn their wages.Asset ownership is key to the effort of reducing the inequality gap. The money printer will not stop going Brrr and the currency in your pocket is still just colourised paper. What this means is that if your only source of income is your salary, then there is some probability that you will only get poorer.If monetary economics is not your cup of tea, I invite you to check the chart of the S&P 500 or any Real Estate Index from 1971, then check the trend of real wages starting from the same period.Wrapping UpThe new value models that are emerging from the convergence of the economic market system with information technology may terminate the traditional divide between the owners of capital and the workers. The Sharing Economy, essentially a collection of exchange systems, could be among the first to evolve from the old Industrial Value Model to a Distributed Value Model, where the right mix of technology and humanity can unleash greater economic potential.Decentralization may fix the Sharing Problem of the Sharing Economy was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 08. 13

You have the Right to a Dat...

Imagine having a personal diary where you write about your life. Now, imagine owning that diary and not being able to enforce your right to possession. It’s yours, and yet you can’t have it.How did this happen?The only Free Cheese is in The MousetrapImagine further that, when you shopped for the notebook, the stationer said you could have it for free. He then mumbled something about the possibility of recording, accessing and using your writing for anything he liked.Because your persona belongs to the underappreciated world of intangible assets, you couldn’t grasp what was at stake there. Au contraire, the stationer, let’s call him Mark — you know where I am heading with this — eagerly recognised that you had handed him your psychological profile and sentiment live update. You entered a transaction with highly asymmetric possession of information. Now the stationery store is selling copies of your diary to any willing buyer interested in reading it and knowing your inner thoughts, without your knowledge.Want to read this story later? Save it in Journal.And for Mark, it’s raining billions.You, just like me, got grifted by the folks selling free blank canvas for your thoughts, under a “Make the world a better place!” neon-sign. We believed them because they looked so relatable in their shabby sweaters. These are the good guys, we thought, the ones that say: “stay weird,” “do what you love,” and “don’t be evil.” After all, they are not THE BANKERS, those villainous souls living in greed-cladded skyscrapers, relentlessly concocting world domination.Then came Theranos, Uber’s self-driving cars, WeWork, Cambridge Analytica, the #DeleteFacebook movement, and suddenly we realised that the grass in Start-up Valley was greener only because it had been fertilised with bullshit.Mark’s SerendipityWe hectically scroll down the privacy policy form and feel relieved at the sight of the accept button. As cookie policies spook the internet, our adaptive unconscious is developing an automatic “Find & Click” response. In a mechanistic manner, we are removing decision points, leaving the cockpit of our digital life in the hands of our automatic behaviour patterns.Terms and conditions constantly change because the way personal data is being harvested is constantly changing. In 2004, a young man creates a website rating girls by their appearance. Fast forward a few years and he’s holding the greatest heap of personal information ever assembled by humanity. That happened fast, but not overnight.Most internet services chose to be free to align with the internet value of Universal Access. Asking for money, apart from not being cool, means also fencing people out of your digital backyard. Unfortunately, being cool doesn’t help to keep the light on, and ads started frothing here and there.Advertising went from a gut-based discipline to scientific method entailing constant theorising and testing. The goal is no longer to conjecture, while chewing a cigar, what your customer may like. In the brave new world, the objective is to create an ever-increasing framework of understanding around your target audience that helps you reinforce the message in a recursive fashion. For that, you need the help of a sexy woman called A.I. along with her favourite food: data. A lot of it.Source: Rhett Allain, WIREDThe internet is the barn saving algorithms from famine. Everything we do online leaves a trace. The data we produce defines, in part, who we are, what we like, with whom we exchange information, what and who we care about. 
Aside from Time, Body and Mind, your identity is arguably the most valuable asset you own.The ruling economic thinking of this age says, in Milton Friedman’s words, that “No-one takes care of somebody else’s property as wisely as he takes care of his own.” Why then, is a society so hot about private property fine with giving away the most personal of properties?Because we had no choice. Today, the last bastion protecting the exploitation of personal data stands upon the belief that there are no alternatives to centralized data retailing.We are starting to see some cracks.Break Free From The Mousetrap and They Will FollowThe General Data Protection Regulation (GDPR) defines the right to Data Portability as:“the right to receive the personal data provided to a controller […] It also gives the right to request that a controller transmits this data directly to another controller”Which is Latin for: what you do on social media, what you search on your browser, what you listen to on your phone, essentially all the data you produce using a service or a device, is yours. And just like a paper diary, you can take it wherever you want, even to a marketplace. The question is how to turn a piece of legislation like the GDPR into actionable rights.In 2000, Chris Downs collected his on- and offline data and sold it on eBay for £150. It took him a few months and 800 pages to put everything together.Chris’ printed dataYou can see how this is not scalable. Further, on its own, our data does not hold much value, but when combined, it aggregates into an attractive product for buyers to extract insights. This is the idea underpinning Data Unions.A Data Union is a framework, currently being built by Streamr, that allows people to easily bundle and sell their real-time data and earn revenue.Here’s how it works: a developer creates an application that collects data from a multitude of data producers. What kind of data is open to imagination: Swash collects browser searches, MyDiem gathers phone’s apps usage, Tracey records fishing data.MyDiem’s user interface under developmentThe anonymised data bundle travels on the Streamr decentralized and cryptographically secure p2p Network and gets sold on the Streamr Marketplace. The profits, shy of the developer/admin fee, are distributed among the Data Union participants through a smart contract.Data Unions model from StreamrThe GDPR and Data Unions give you back choice, control and empowerment, but they are not going to take the Data Cartel out of the picture all at once. Even if you install the Swash plugin, Google will continue to monetise your searches.Nevertheless, when you decide to own and sell your data, you are remoulding a monopoly into a polyopoly. Which is Greek for a market situation where there are many sellers and many buyers. Competitive forces in this market typology are highly effective in suppressing control centres and providing better transparency and inclusion. According to a paper from the United Nations, “Facebook or Google would lose monopoly powers if the data they collect were available to all interested parties at the same time.”As long as we are human, we will be vulnerable to persuasion, thus no technology will give us immunity from practices designed to engineer consent. Advertising is not intrinsically evil, yet the Cambridge Analytica scandal showed that there is a veil under which the most treacherous of these practices tend to flourish. 
The Data Union promises to lift that veil by giving you your share of the profits and by opening the market to a symposium of sellers and buyers, where nobody holds control over the other. If the market stops providing monopolistic benefits, the only way for the data exploiters to survive will be to stop the exploitation and comply with the new rules of inclusion.

The Data Hypernormalisation

In the Soviet Union of the 70s and 80s, everyone witnessed the crumbling of the system, but no one could imagine or dare to propose an alternative to the status quo. Everyone played along, maintaining the pretence of a functioning society. The wintriness of a mock-up system was accepted as the new reality. This effect was termed “Hypernormalisation” by the anthropologist Alexei Yurchak.

[Image: From the BBC documentary “Hypernormalisation” by Adam Curtis]

In today’s Data Economy we are walking a thin line between a global, secure, accessible, neutral network of information and a dystopia where, without free-floating market prices, a few firms capture all the surplus value created by data — your data — and amass unprecedented wealth. Today’s Data Economy is not a free market. With a few specialised firms anchored to their data-product niches, the system is highly uneven, featuring asymmetric bargaining powers. It is a system designed to benefit the few at the expense of the many.

Ignoring the abuse that is being perpetrated on your digital property is like hurrying down a steep staircase with your hands in your pockets. You may survive, but you are not going to win a medal for smartest guy in the room.

If you are interested, I encourage you to check the latest news from Streamr.

You have the Right to a Data Income was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 07. 21

Dev update, June 2020

This is the Streamr project dev update for June 2020, welcome! Here are the highlights of the month:

- Data Unions framework now available in public beta!
- Finished a proof-of-concept for a next-generation DU architecture
- Network whitepaper being finalised, should be out in late July/early August
- Completed Phase 1 of token economics research with BlockScience
- End-to-end encryption with automatic key exchange now 80% ready

Data Unions in public beta

The Data Unions framework, which had been in private beta since October last year, is now publicly available to everyone. So what’s new?

- There are new docs, detailing the steps to create Data Unions and integrate apps to the framework.
- The “Create a Product” wizard on the Marketplace now includes the option to create a Data Union, instead of a regular product.
- For Data Union products, the product view now shows various stats about members, earnings, and Data Union parameters.
- DU products also expose an admin section, where DU creators can manage members, app keys, and such.
- The JS SDK ships with easy accessors to the DU-related methods, making integration a breeze for JS-based platforms. The Java SDK will get support before the official launch, and even if you’re working on a platform with no official SDK just yet, integration to the raw API isn’t hugely complicated, although it does require some effort (and we’re happy to help).

The beta is feature-complete in terms of the fundamentals. The purpose of the beta is to expose any remaining issues before the framework is officially launched after the summer. We’ll be expanding the SDK support as we go, as well as creating other useful tooling for Data Union admins, such as scripts to kick out inactive members and to implement custom join procedures (a Data Union might want to include a captcha to prevent bots from joining, etc.).

Upcoming Data Unions architecture

We have already started working on the first major post-release upgrade to the Data Unions framework. In the previous update, I mentioned we’re working on a proof-of-concept, and this task has now been completed.

We’re calling this upgrade Data Unions 2.0, communicating a major version bump with an improved architecture. In contrast to the current architecture built around Monoplasma and its Operator/Validator model for scalability and security, Data Unions 2.0 will feature an Ethereum sidechain to contain the Data Union state fully on-chain, with the POA TokenBridge (with AMB) connecting the sidechain to mainnet.

All the current Data Unions will be upgradable to the new infrastructure once it’s ready later this year. While the proof-of-concept has been completed and we’re now committed to this approach for the next upgrade, there is still plenty of work to be done. A blog post detailing the upgrade will be posted in due course.

Network whitepaper

The whitepaper, detailing the Corea milestone of the Streamr Network, is almost ready. The experiments are complete, and we’re working on the text to accompany the results. In June, the work snowballed slightly, as we realised we need to prove the randomness of the generated network topologies in order to relate to some earlier literature, but that hurdle has thankfully now been crossed. If no further obstacles are encountered, the paper should be ready by the end of July or early August.

Phase 1 with BlockScience completed

We’ve reached the end of Phase 1 in the token economics research project with BlockScience.
The Phase 1 deliverable was a document containing mathematical formulations of the objects and rules in the Streamr Network.

In Phase 2, we’ll start the actual modeling process, in which the first, simple simulations of the Streamr Network token economics are built using the cadCAD framework. In future phases, the models will be further refined and iterated, and those models will inform our decisions about the future incentive model.

End-to-end encryption with key exchange

Streamr has had protocol-level support for end-to-end encryption for a long time. It’s also been implemented in the SDKs as a pre-shared key variant. This is a simple implementation that relies on each party’s ability to communicate secrets outside the system, over another secure channel. The downside of the pre-shared key approach is that the publishing and subscribing parties need to know and contact each other in advance before they exchange encrypted data.

We’ve recently been working on a key exchange mechanism that happens directly on the Streamr Network to securely communicate the keys to the correct parties. This makes end-to-end encryption effortless and automatic for all parties involved. This is very important, because end-to-end encryption is obviously a requirement for decentralization; nodes in the network will generally be untrusted. And usability shouldn’t be sacrificed for security — the automatic key exchange achieves both.

Looking forward

July and August will be the epicentre of the team’s annual holidays, and we’ll be producing only one dev update over this period, due in the second half of August. However, over the next couple of months you can also look forward to dedicated posts about the Network whitepaper and the Data Unions 2.0 architecture.

A summary of the main development efforts in June is below, as well as a list of upcoming deprecations that developers building on Streamr should be aware of. As always, feel free to chat with us about Streamr in the official Telegram group or the community-run dev forum.

Network

- Whitepaper making slow but steady progress, should be ready in late July/early August
- Encryption key exchange 80% ready in both JS and Java SDKs
- Discovered an issue where a tracker gives a node more peers than it should, working on a fix
- WebRTC issues still being investigated, opening an issue with the library developers
- Storage refactor in PR, working on data migration tool
- Token economics research Phase 1 completed
- Support for old ControlLayer protocol v0 and MessageLayer v28 and v29 dropped, as previously communicated in the breaking changes section

Data Unions

- Data Unions framework launched into public beta
- Alerts & system monitoring improvements to detect problems
- Data Unions 2.0 proof-of-concept successfully completed

Core app, Marketplace, Website

- Terms of use, contact details, and social media links added to Marketplace products
- Working on a website update containing updates to the top page, a dedicated Data Unions page, and a Papers page to collect the whitepaper-like materials the project has published
- Stream page now shows code snippets for easy integration

Deprecations and breaking changes

This section summarises deprecated features and upcoming breaking changes. Items marked ‘Date TBD’ are known to happen in the medium term, but a date has not been set yet.

(Date TBD): Support for API keys will be dropped. As part of our progress towards decentralization, we will eventually end support for authenticating based on centralized secrets. Applications integrating to the API should authenticate with the Ethereum key-based challenge-response protocol instead.
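For readers wondering what challenge-response authentication with an Ethereum key looks like in practice, here is a minimal conceptual sketch using the eth_account Python package. It only illustrates the general principle of proving key ownership by signing a server-issued challenge; it is not Streamr’s exact protocol or API:

```python
# Conceptual sketch of Ethereum key-based challenge-response authentication.
# Not Streamr's actual flow; it only shows the signing/verification principle.
import secrets
from eth_account import Account
from eth_account.messages import encode_defunct

# Client side: the integrating application holds an Ethereum key pair.
client = Account.create()

# Server side: issue a random, single-use challenge text.
challenge = f"Sign this challenge to authenticate: {secrets.token_hex(16)}"

# Client side: sign the challenge with the private key (no secret leaves the client).
signed = Account.sign_message(encode_defunct(text=challenge), private_key=client.key)

# Server side: recover the address from the signature and compare it to the
# address registered for the user; a match proves control of the key.
recovered = Account.recover_message(encode_defunct(text=challenge), signature=signed.signature)
assert recovered == client.address
```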
Instructions for upgrading from API keys to Ethereum keys will be posted well in advance of dropping support for API keys.

(Date TBD): Support for unsigned data will be dropped. Unsigned data on the Network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will be discontinued as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).

Originally published at https://blog.streamr.network on July 14, 2020.

Dev update, June 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 07. 15

You shouldn’t sell your kid...

You can’t sell your kidneys! Responding to objections to data ownershipIn May, I published a lengthy essay on why, for ordinary individuals, privacy was dead and how a framework of data ownership would provide not just more privacy, but also much more data sharing, economic equality and dignity for the billions of people who use the internet.The essay sparked a fairly passionate response on social media from those who advocate for a privacy-centric world. I had anticipated more than a bit of blow-back, especially because despite the first essay’s length, many points about the ownership model remained unanswered. That is my failing, which I hoped to rectify with this second essay. I’ve tried to whittle down the objections to seven points. No doubt there are more but these seven seem to be the most crucial to answer.1. Data monetization will hinder the open data economy.The argument here is that if the goal is to ensure data is shared most widely, for everyone’s benefit, then it needs to be free. As soon as you put price tags on data, then it will inject “ enormous friction into free flow of information.”At first glance, this sounds like it should be true. Paying for stuff is a friction — not paying is frictionless. But this misses a bigger economic insight. Apply the same argument to bread. If we say bread must be free for all to utilise, and the state must ensure all bread producers make their bread free for all to utilise, (which is what Open data campaigners are ultimately asking for with data) then far fewer people would have bread (that’s a pretty big friction!). Why is this so? Simply because there would be no incentive to produce bread. People may argue that data is an effective side-product of other activity. But that is far from clear. In fact, as Streamr’s sister company and WWF are already discovering, incentivising the production of data turns out to create very original and necessary products.At its most fundamental level, money is actually a communication tool. Removing money from data means there is no common protocol for sorting good and bad products. Money allows us to say, “my toaster is worth 162 of those apples, 12 pairs of socks and 73 ballpoint pens” all at the same time. A well-priced market for data will therefore sort the good from the bad and end the under-the-table economy which currently exists for user-generated data. By putting a price on data, you should actually see more of it being exchanged and distributed.But what about those data sets that should remain free because there is a social good involved? Introducing money devalues social giving? Well, why don’t we leave it to ordinary people (who create that data) to decide whether they want to share what they own freely or not? By insisting that data should not have a price, those who want Open data are effectively insisting that money should be replaced with laws to enforce its distribution. It is a busted model at best. And for anyone with libertarian instincts, a dangerous one at worst.2. Trading data will kill privacy further.The argument often made about devaluing privacy by trading it, is about commodifying a right. It’s about what goes on in people’s minds. 
To put it bluntly, if you turn data into property and give people monetary incentive to sell, then really you’re bribing them to forgo their privacy.In the original essay I argue that people with ownership rights over their data will have far more legal and enforcement leverage to obtain whatever outcome they desire: a vast improvement over the current scenario where people are forced to beg FAANG or their governments for just one outcome — privacy. Those points in and of themselves should answer this critique because in the round, with data ownership, people will have more choice over what happens to their data. But there are several other retorts to deploy here that answer the bribery point more directly.Firstly, people are likely to imbue data with more worth, not less, if they own it. This is a well-studied behavioral economics phenomenon termed the endowment effect. The phenomenon in aggregate could be far, far larger on people’s mindsets than anything privacy campaigners could muster in terms of public education.Secondly, monetisation allows people to better judge precisely what they are forgoing in terms of their privacy. Not every piece of information I generate is equally precious in terms of the integrity of my identity in the public sphere. I care when others compile lists of who I emailed or texted today. I don’t care so much when it comes to revealing what songs I listened to (though I would of course care if that data set can be cross-referenced so as to reveal the first).Currently privacy Puritans ask people to get involved in deeply technical or political fights with both governments and companies in order to resist all intrusions. That’s the only weapon of resistance they can offer, and for most people it is a near impossible drain on their time and abilities. And it’s this impossible ask, which devalues their privacy more than anything: because it is too difficult to protect what is precious, people end up giving up on all of it and their privacy becomes entirely worthless by default.So why not put a value on it, and ask people to figure out those decisions for themselves? I’d bet that if an advertising agency or a hedge fund asked to pay $20 to listen in to people’s conversations each month, the vast majority would give it some hard thinking before saying yes or no. Because people are so powerless to begin with, they barely think about it at all. Putting a price on privacy helps people determine what to them is valuable and what is not. Given where we are at the moment, that’ll very likely mean a lot more information will remain private or priced so high (in aggregate) that said information no longer makes commercial sense to purchase.3. Turning rights into commodities harms the poor the most.But what about the poor? Those people for whom $20 from an advertising agency is a week’s wage? Won’t they be turned into data producing machines, each click generating more money for them but vastly more for the companies utilising the data? Won’t this set-up reinforce existing inequalities rather than mitigating them? What if people are tempted into selling all the rights to their genetic code? If you’re not careful, the warning goes, this becomes analogous to setting up a market in body parts where the poor are enticed to sell their kidneys. 
This dystopian vision is vividly laid out by Valentina Pavel here.To sincerely believe that these nightmare scenarios will come true, you have to take a few deft mental leaps and reduce your model of ownership to the most simplistic notion of property that exists. I own this lumber. I sell it to you. You now own it and I have no claim. End of story.But of course, property transfers encompass a far broader spectrum of models. There’s a reason why it makes up nine-tenths of the law. When transferring data as property, Data Unions, who act as mediators of people’s data, will likely adopt leasing rights more akin to authorship rights than simplistic property rights. The academic Maria Savona has begun to argue this out. Leasing is of course only slightly more complicated as an ownership structure, but it means that professional bodies (data buyers and Data Union administrators) can come to terms with how property is utilised and in what way. This happens in the real world all the time, every single day. To argue that it can’t happen with data (it already does), really is wilful blindness.And yes, hands up, we’re going to need legislation to stop unscrupulous players, and to establish healthy relations between a union’s managers and its owners. Excitingly, this is something that is already being worked on by RadicalXChange and is also being discussed by the European Commission.And maybe, too, rather like the housing market, the sale of such property will be regulated to the point where individuals will find it difficult to simply sell off their personal data without employing an agent (like a Data Union) to act on their behalf.But there is a second element to this counter-argument to data ownership which deserves teasing out. Usually these arguments come from those who model society as an interaction between three parties: the state, atomised individuals and big tech. But this is a desperately hollowed view of what society actually is. And it’s one which way too easily forgets what civil society actors like labour unions, mutual savings and loans banks and credit unions are doing for the position of the poor. By collectivising interests, those institutions improved, not further immiserated, society’s most disadvantaged people. Why wouldn’t they act in the same fashion for the poorest when it comes to the data economy? In our nearly realised world of Data Unions, the brokering of terms of sale does not take place between an individual and a tech giant. That world would indeed be a rapacious one for the individual to navigate. Instead these sales take place through a mediator, Data Union professionals (like Swash) who represent the interests of individual members when coming to terms with data buyers around the globe. These are therefore transactions between parties on a far more equal footing.The suggestion you’d be selling your kidneys is not hyperbole. This is the argument from the EC’s own specialist body, The European Data Protection Supervisor, on the matter:There might well be a market for personal data, just like there is, tragically, a market for live human organs, but that does not mean that we can or should give that market the blessing of legislation.Then as the 2017 report’s next line goes on:One cannot monetise and subject a fundamental right to a simple commercial transaction, even if it is the individual concerned by the data who is a party to the transaction.This last sentence really grates. It belies a real arrogance borne of a desperately paternalistic attitude. 
Why shouldn’t people have a say in matters that directly affect them. Even more so when they are born of their labour? And it grates even more so given that it is our paternalistic legislator who has been doing all the failing when it comes to protecting privacy. Because all this is being said in an economy in which thousands of companies are already owning and trading our data with each other.4. But privacy tools are just getting warmed up!In my essay, it’s very clear that I did not give due heed to the new privacy tech that people will already be able to use, such as Zero knowledge proofs or completely trustless decentralized systems, software that will bolster the privacy cause immeasurably by making privacy easier for individuals to control. And what about the extra money that has poured into the privacy tech space (largely during the crypto boom of 2017) that is yet to bear developmental fruit? (Don’t forget that crypto is short for cryptography — one of the most central privacy enhancing technologies).A quick rejoinder is this: these are just tools that can also be deployed in a framework of data ownership as well as within a privacy setting. Privacy tools needn’t only be employed within a privacy-centric world view. Rather like putting up blinds for my house — I can both own my data, and encrypt it. They aren’t mutually exclusive. It’s the overall legal/ethical/economic framework that’s most important to get right. The privacy framework still suffers from the critiques made in the original essay, which don’t negate the fact of extra (and more technically complicated) tooling.5. You can’t claim ownership over data — it’s too complicated/ interconnected.Because data is so interlinked between people, how will it be possible for a single individual to own it? Glen Weyl says this: “ My mother’s (date of birth) is also my (mother’s date of birth).” This is of course true. There are hundreds of examples like this. Photos that contain more than the image of yourself. A home address where more than one person lives. How can any individual claim data points like these when the underlying information they communicate has a value generation lineage which could be claimed by so many others, too?It’s a powerful argument but the flaw perhaps is this: it’s almost entirely hypothetical. In the world of actual data sales, useful saleable data generated by individuals isn’t made up of individual unconnected data points. The theoretic doesn’t correspond to reality. Firstly, no one actually wants to buy one birthday. So argue all you want, but the underlying property is valueless (and I conceded that plenty of people have in fact argued over the rights to own near valueless items for the sake of principle).And even a bunch of birthdays is actually just that. Without names attached it’s just a bunch of random dates. Literally anyone could generate that information. In fact even birthdays and full names don’t provide much in the way of saleable data. What sells, what has value to others, are multiple data points from individuals that are linked (usually in chronological fashion). Because those linked data points provide useful information about the world.If we take that as the premise, then linking those data points is the work done in creating the output. 
If you start linking data points that pertain to you (even though some of them might interconnect to others) you’ll quickly create a data stream that is unique to you as an individual.If that is starting to sound overly complicated, swap the word data for story and you get a better intuitive sense of what is meant. As an author, I can’t own a given word (or data point) in my book. My rights to assert ownership derive from the fact I’ve worked to put a significant number of those words together to form something entirely unique (data stream). Sometimes that can be as short as a haiku. Other times it is War and Peace.Pointing out that data sets are made up of individual data points that can’t be owned because they are common to others is correct. No one can dispute that. But it is akin to pointing at hundreds of pages of Wolf Hall then asserting that Hilary Mantel has no right to intellectual property over those works because she can’t own any individual word because other people use those words.And of course many of these legal arguments about what can and can’t be owned, and in what ways, whether it’s literary, photographic or otherwise, have already been settled (are the ownership rights over data from a Facebook group really any more complicated than a multi-member rock band writing and recording a #1 hit?). Over the centuries, legal precedents have been set. So whilst this might seem complicated within the context of data, for those navigating books, films, or music, those precedents are relatively easily navigated today.There are many synergies here between the established world of creative IP and the up-and-coming world of data ownership. There is plenty of case law already available to inform the numerous disputes that will inevitably arise once data is further instantiated as a new form of property. And that’s okay. Because those disputes, once resolved, will, like with other forms of intangible property ownership, eventually allow for easier navigation and ultimately much better outcomes.6. What about indirect data?So it is that not all data that is generated by the individual is solely about just that individual (interpersonal data), not all data about an individual is generated by that individual (indirect data). How do data ownership and monetisation solve these issues? For now, I’m not sure they do.When it comes to indirect data, I for one believe in the utility of people to collect information on society. Otherwise we might as well close all sociology departments now. The problems come when CCTV cameras (or street lights) can track your every movement or employers own employee work product, or you find yourself in ten years’ time, living in what we benignly call a smart city. A data ownership model doesn’t have a direct answer to this, which still means there’s plenty of room for privacy laws to regulate this sphere of data collection.7. Individuals won’t get enough money to make this worthwhile.This is an argument formulated by people who’ve likely never entered the business of selling personal data (granted: few people have). Sure, data from one app might be worth very little when divided amongst all users, but combine my credit card data with my Netflix, Spotify, Google, Amazon Alexa, Twitter and LinkedIn data, and that’s likely worth hundreds of dollars every year. If these critics had sold their data, they’d know how much user-generated data is worth in those under-the-table markets that already operate secretly every day. 
And of course not every Data Union will need every single person to join for its data to be valuable. Both now, and in the future, Data Unions just need a sample size of the whole to deliver reliable information to buyers. The point about the future is important because, as Lanier says, the pie will grow.“The point of a market is not just to distribute a finite pie, but to grow the pie. Those who dismiss the value of what people do online have forgotten this most basic benefit of open markets.”Originally published at https://blog.streamr.network on July 2, 2020.You shouldn’t sell your kidneys! Responding to objections to data ownership was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 07. 02

News: the Data Union public...

As of today, you can create and deploy a Data Union using the tooling available in Streamr Core. The Data Union framework, now released in public beta, is an implementation of data crowdselling. By integrating into the framework, app developers can empower their users to monetise the real-time data they create. Data from participating users is sent to a stream on the Streamr Network, and access to the pooled data is sold as a product on the Streamr Marketplace. Any revenue from the data product is automatically shared among the Data Union members and distributed as DATA tokens.

Streamr launched the Data Union framework into private beta in October last year, with the Swash app at Mozfest in London. Swash is the world's first Data Union, a browser extension that allows individual users to monetise their browsing habits. With this public beta launch, we hope to spark the development of even more Data Unions.

What's new in the public beta release?

If you've used Streamr Core before, you might already be familiar with creating products on the Marketplace. With the introduction of the Data Union framework, the 'Create a Product' flow now presents two options: create a regular Data Product, or create a Data Union.

Data Unions are quite similar to a regular data product — they have a name, description, a set of streams that belong to the product, and so on. However, there is one important difference: the beneficiary address that receives the tokens from purchases is not the product owner's wallet — instead it is a smart contract that acts as an entry point to the revenue sharing.

The Marketplace user interface guides the user through the process of creating a Data Union and deploying the related smart contract. The Data Union can function even while the product is in a 'draft' state, meaning that app developers can test and grow their Data Unions in private, and only publish the products onto the Marketplace once a reasonable member count has been achieved. For the app developer/Product Owner, there are also new controls for: setting the Admin Fee percentage (a cut retained by the app developer/Product Owner), creating App Secrets to control who can automatically join your Data Union, and managing the members of your Data Union.

For all published Data Unions, basic stats about the Data Union are displayed to potential buyers on the product's page.

An example Data Union Product overview

Deploying a Data Union

The process of creating Data Unions and integrating apps with them is now described in the relevant section of the Docs library. Here's the process in a nutshell:

- Make sure you have MetaMask installed, and choose the Ethereum account you want to use to admin the Data Union
- Authenticate to Streamr with that account (creates a new Streamr user), or connect that account to your existing profile
- Create one or more streams you'll collect the data into
- Go to the Marketplace, click 'Create a Product', and choose Data Union
- Fill in the information for the product and select the stream(s) you created
- Click the Continue button to save the product and deploy the Data Union smart contract!

Your empty Data Union has been created! Next, you'll want to integrate the join process and data production into your data source app. The easiest way to accomplish those is to leverage the Javascript SDK, which already includes support for all the Data Union functions.
In your app, you'll want to:

- Generate and store a private key for the user locally
- Make an API call to send a join request (include an App Secret to have it accepted automatically)
- Start publishing data into the stream(s) in the Data Union!

Again, detailed integration instructions are available in the Docs.

Data Unions present an opportunity for app developers to reward users for sharing their data, giving Data Union products a competitive advantage

So what's next?

The public beta is feature-complete in the sense that all the basic building blocks are now in place. Over the next couple of months, we'll be addressing any loose ends, such as bringing the DU functionality to the Java SDK and adding tooling for Data Union admins to manage their member base.

We'll also be monitoring the system closely, in the hope that the public beta phase will help reveal any remaining issues. Please do expect to encounter some hiccups along the way — none of this has been done before! If all goes well during the public beta, we're looking to officially launch Data Unions in Q3 this year. The launch will be accompanied by a marketing campaign and some changes to the website to highlight the new functionality.

If you have an idea for a Data Union, take a look at the Docs to get started. The Streamr Community Fund is also here to offer financial support to the development of your project — you can apply here. We're also happy to answer all your technical questions in the community-run developer forum and on Telegram.

Originally published at blog.streamr.network on June 18, 2020.

News: the Data Union public beta is now live was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
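To make those three integration steps more concrete, here is a minimal sketch in JavaScript. It assumes the streamr-client JS SDK for client creation and publishing; the key generation uses ethers, and the joinDataUnion call, its option names, the addresses and the stream id are placeholders for illustration only, so check the Docs for the SDK's actual join helper.

```javascript
// Minimal sketch of a Data Union app integration (public beta era).
// Parts marked as placeholders are assumptions, not the confirmed SDK API.
const StreamrClient = require('streamr-client')
const { Wallet } = require('ethers')

async function main () {
  // 1) Generate and store a private key for the user locally.
  //    In a real app you would persist this, e.g. in the browser's local storage.
  const wallet = Wallet.createRandom()

  const client = new StreamrClient({
    auth: { privateKey: wallet.privateKey },
  })

  // 2) Send a join request for the Data Union, passing an App Secret so the
  //    request is accepted automatically. joinDataUnion and its option names
  //    are placeholders; the Docs define the real join call.
  await client.joinDataUnion({
    dataUnion: '0xYourDataUnionContractAddress', // placeholder address
    secret: 'your-app-secret',                   // placeholder App Secret
  })

  // 3) Start publishing the member's data into the Data Union's stream.
  await client.publish('your-stream-id', {       // placeholder stream id
    url: 'https://example.com',
    visitedAt: Date.now(),
  })
}

main().catch(console.error)
```

Because the member's key lives in their own app rather than with the app developer, later revenue withdrawals can be made with that same key, without the developer taking custody of funds.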

Streamr network

20. 06. 18

Dev Update May 2020

Welcome to Streamr dev update for May 2020! Looking back at last month’s update, we’re happy to realise that many major development strands that were still seeking solutions only a month ago, have now found them and fallen nicely into place. Here’s a few hand-picked highlights from May:Solved all remaining problems blocking the upcoming Network whitepaperStarted testing the WebRTC flavour of the Network at scaleGot the Data Unions framework relatively stable in advance of entering public betaStarted planning a roadmap towards the next major Data Unions upgradeCompleted Phase 0 of token economics research with BlockScienceThe Network whitepaperFor over 9 months now, a few people in the Network team have been hard at work at documenting and benchmarking the Network at scale. The deliverable of that effort is an academic paper, intended for deep tech audiences, in both the enterprise and crypto spaces, as a source of detailed information and metrics about the Network.This blog post from September outlined the approaches and toolkit we were using to conduct the experiments, but the road to the goal turned out to be quite complicated. We’ve sort of learned to expect the unexpected, because pretty much everything we do is in uncharted territory, but trouble can still come in surprising shapes and sizes.We worked steadily on setting up a sophisticated distributed network benchmarking environment based on the CORE network emulator, only to ditch it several months later because it was introducing inaccuracies and artifacts into our experiments at larger network sizes of 1000 nodes or more. We then activated Plan B, which meant running the experiments in a real-world environment instead of the emulator.We chose 16 AWS data centres across the globe and ran between 1 and 128 nodes in each of them, creating Streamr Networks of 16–2048 nodes in size. The new approach was foolproof in the sense that the connections between nodes were real, actual internet connections, but running a large-scale distributed experiment across thousands of machines brought its own problems. I’ll give some examples here. First of all, it needed pretty sophisticated orchestration to be able to bring the whole thing up and tear it down in between experiments. Secondly, accurately measuring latencies required the clocks of each machine to be synchronised to sub-millisecond precision. Thirdly, the resulting logs needed to be collected from each machine and then assembled for analysis. None of these things were necessary in the earlier emulator approach, but the reward for the extra trouble was accurate, artifact-free results from real-world conditions, adding a lot of relevance and impact to the results.During May, we finally got each and every problem solved, and managed to eliminate all unexpected artifacts in the measured results. Right now we are finalising the text around the experiments and their results, and we are expecting the paper to become available on the project website in July.Network progress towards BrubeckWorking towards the next milestone, Brubeck, means making many important improvements. One of them is enabling nodes behind NATs to connect to each other, which will allow us to make each client application basically a node. This, in turn, helps achieve almost infinite scalability in the Network, because then clients will help propagate messages to other clients. The key to unlocking this is migrating from websocket connections to WebRTC connections between nodes. 
This work is now in advanced stages, although we are still observing some issues when there are large amounts of connections per machine. Having developed the scalability testing framework for the whitepaper comes in handy here; the correct functioning of the WebRTC flavour network can be validated by repeating the same experiments and checking that the results are, in line with the ones we got with the websocket edition.Another step towards the next milestone is making the tracker setup more elaborate. Trackers are utility nodes that help other nodes discover each other and form efficient and fair message broadcasting topologies. When the Corea milestone version launched, it supported only one tracker, statically configured in the nodes’ config files, making peer discovery in the Network a centralized single point of failure; if the tracker failed, message propagation in the Network would still function, but new nodes would have trouble joining, over time deteriorating the Network. Thanks to recent improvements, the nodes can now map the universe of streams to a set of trackers, which can be run by independent parties, allowing for decentralization. Trackers can now be discovered from a shared and secure source of truth, a smart contract on Ethereum mainnet, which in the future could be a token-curated registry (TCR) or a DAO-governed registry. The setup is somewhat analogous to the root DNS servers of the internet, governed by ICANN — only much more transparent and decentralized.Ongoing work also includes improving the storage facilities of the Network. Storage is implemented by nodes with storage capabilities. They basically store messages in assigned streams into a local Cassandra cluster and use the stored data to serve requests for old messages (resends). The current way we store data in Cassandra has been problematic when it comes to high-volume streams, leading to uneven distribution of data across the Cassandra cluster, which in turn leads to query timeouts and failing resends. In the improved storage schema, data will be more evenly distributed, and this kind of hotspot streams shouldn’t cause problems going forward. As a result, the Network will offer reliable and robust resends and queries for historical data.There’s also ongoing work to upgrade the encryption capabilities of the Network — or more specifically the SDKs. The protocol and Network have actually supported end-to-end encryption since the Corea release, but the official SDKs (JS and Java so far) only implement end-to-end encryption with a pre-shared key. The manual step of pre-sharing the encryption key limits the usefulness of the feature. The holy grail here is to add a key exchange mechanism, which enables publishing and subscribing parties to automatically exchange the decryption keys for a stream. This feature is now in advanced stages of implementation, and effortless encryption should become generally available during the summer months.Data Unions soon in public betaThe Data Unions framework is approaching a stable state. In the April update, we discussed some issues where the off-chain state of the DUs became corrupted, leading to lower than expected balances. All known issues were solved during May, and the system has been operating without apparent problems since then.The Data Unions framework has been in private beta since late last year, with a couple of indie teams ( Swash having made the most progress so far) building on top of it. 
During private beta, we’ve been working on stability, documentation, and frontend support for the framework. We’re now getting ready to push the DU framework into public beta, which means that everyone can soon start playing around with it. The goal of the public beta phase over the summer months is to get more developers hands-on with the framework, and to iron out remaining problems that might occur at larger-scale use (and abuse).We’ve also started planning the first major post-release upgrade to Data Unions architecture, which improves the robustness and usability of the framework. We are currently working on a proof of concept, and we’ll be talking more about the upgrade over the course of the summer.Phase 0 of token economics research completedAs was mentioned in one of the earlier updates, we started a collaboration with BlockScience to research and economically model the Streamr Network’s future token incentives. It’s a long road and we’ve only just started, but it’s worth sharing that in May we reached the end of Phase 0. This month-long phase was all about establishing a baseline: transferring information across teams, establishing a glossary, documenting the current Network state and future goal state, and writing down what we currently know as well as key open questions.The work continues on an ongoing basis with Phase 1, the goal of which is to define mathematical representations of the actors, actions, and rules in the Streamr Network. In future phases, the Network’s value flows will be simulated, based on this mathematical modeling, to test alternative models and their parameters and inform decisions that lead to incentive models sustainable at scale.Looking forwardBy the next monthly update, we should have Data Unions in public beta, and hopefully also the Network whitepaper released. Summer holidays will slow down the development efforts over July-August, but based on the previous summers, it shouldn’t prevent us from making good progress.To conclude this post, I’ll include a bullet-point summary of main development efforts in May, as well as a list of upcoming deprecations that developers building on Streamr should be aware of. As always, you’re welcome to chat about building with Streamr on the community-run dev forum or follow us on one of the Streamr social media channels.NetworkExperiments for the Network whitepaper have been completed. Finalising text content nowJava client connection handling issues solved. 
- Everything running smoothly again, including canvases
- The Network now supports any number of trackers
- Brokers can now read a list of trackers from an Ethereum smart contract on startup
- WebRTC version of the Network is ready for testing at scale
- Token economics research Phase 0 completed
- Working on a new Cassandra schema and related data migration for storage nodes
- Working on key exchange in JS and Java clients to enable easy end-to-end encryption of data

Data Unions
- Data Union developer docs are complete
- Problems causing state corruption were fixed
- Started planning a major architectural upgrade to Data Unions

Core app, Marketplace, Website
- Streamr resource permissions overhaul is done
- Buyer whitelisting feature for Marketplace is done
- Working on adding terms of use, contact details, and social media links to Marketplace products
- Working on a website update containing updates to the top page, a dedicated Data Unions page, and a Papers page to collect the whitepaper-like materials the project has published

Deprecations and breaking changes

This section summarises deprecated features and upcoming breaking changes. Items with dates TBD are known already but will occur in the slightly longer term.

(Date TBD): Authenticating with API keys will be deprecated. As part of our progress towards decentralisation, we will eventually end support for authenticating based on centralised secrets. Integrations to the API should authenticate with the Ethereum key-based challenge-response protocol instead, which is supported by the JS and Java libraries. At a later date (TBD), support for API keys will be dropped. Instructions for upgrading from API keys to Ethereum keys will be posted well in advance.

(Date TBD): Publishing unsigned data will be deprecated. Unsigned data on the Network is not compatible with the goal of decentralization, because untrusted nodes could easily tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease prior to reaching that point. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above), which enables data signing by default.

Originally published at blog.streamr.network on June 16, 2020.

Dev Update May 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
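For integrators affected by the API key deprecation above, the switch mostly amounts to configuring the client with an Ethereum private key; the library then performs the challenge-response signing for you. The sketch below only illustrates the idea of that signing step, using ethers; the challenge text and example key are illustrative, and the real challenge format and endpoints are handled by the library.

```javascript
// Sketch of Ethereum key-based authentication (challenge-response).
// In practice the streamr-client library does this automatically when
// configured with auth: { privateKey }; this only illustrates the idea.
const { Wallet } = require('ethers')

async function signChallenge (privateKey, challengeText) {
  const wallet = new Wallet(privateKey)
  // The server issues a short-lived challenge string; the client proves it
  // controls the key by returning a personal-message signature over it.
  const signature = await wallet.signMessage(challengeText)
  return { address: wallet.address, signature }
}

// Usage: in a real flow, challengeText would come from the authentication endpoint.
signChallenge(
  '0x0123456789012345678901234567890123456789012345678901234567890123', // example key only
  'Example challenge text issued by the server'                          // illustrative only
).then(console.log)
```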

Streamr network

20. 06. 16

Streamr project update, Jun...

The global moment we find ourselves in today is unlike any other. It has thrown up uncertainty and confusion, yet at the same time it has shed light on something that has always been central to the Streamr vision — the enduring strength and importance of community; the ways in which our connections are what strengthen us in good times and bad.The Streamr community is not only invaluable to the growth of the ecosystem and the ultimate success of the Streamr project, it energises the team with ideas, challenges and discourse on a daily basis. So this felt like a good time to check in with you all, to review everything we have accomplished since launch, with your support. I’ll also be running an AMA on the 11th of June at 15:00 CEST, but this article can give an overview ahead of that, and perhaps prompt some questions for discussion. Let’s begin by reminding ourselves of the vision.The Streamr VisionStreamr was founded with the goal of building a real-time data infrastructure for future data economies. Ideally, all data streams in the world could be accessed via your nearest node, with participants incentivised to provide both content and delivery services on the system.The cornerstones of Streamr’s chosen approach are decentralization, peer-to-peer and blockchain. This is because, in our view, the only acceptable implementation of a future data infrastructure is one that is global, scalable, secure, robust, neutral, accessible and permissionless.Members of the Streamr team before the Network launch pier-to-pier boat party at DevCon5Decentralization ticks all the boxes (though it may not be the only solution — that remains to be seen). While a system built on fiat currencies and centralized technology could, if done skillfully, achieve sufficient user-facing functionality, it would always be heavily influenced by the commercial business goals of the commercial party operating it. The backbone of the global data economy should not serve someone’s business goals — it should serve everyone’s business goals.If the operation and governance of a system is distributed across many independent parties in different jurisdictions and geographies, with a diverse range of commercial interests, no individual party or set of parties can compromise it. And that’s when it becomes truly unstoppable.The Streamr vision is the foundation of what we do and why we do it. It is largely unchanged since launching the project in 2017, and continues to be the steady beacon that defines who we are.So what are we doing in 2020?In working towards the Streamr vision, the goal we have steadfastly pursued is delivering the roadmap laid out in the 2017 whitepaper. And we are well on our way to achieving that goal.Milestone 1 is complete, with the successful launch of the Streamr Marketplace in 2018, and the launches of Core and the Network last year. Since late last 2019, we’ve been working on Milestone 2, the main aim of which is to progress the Network towards token economics and decentralization. Much of the work in this Milestone focuses on removing technical obstacles for scalability and decentralization, and commencing research on token economics.In January of this year, Streamr ran an internal developer ‘Networkshop’ which addressed some of these obstacles head-on. Here, the Streamr dev team debated multiple development areas, generated new ideas and, above all, came away with the next steps to creating a network that is strong, secure and scalable. 
As we go forward, this process will involve but not be limited to: moving to a ‘clients as nodes’ model, ensuring network messages are signed and encrypted, and enhancing the systems by which we prevent network attacks.Members of the Streamr team at the Network GTM workshop in HelsinkiThe ‘Networkshop’ discussions around token economics were foundational — we defined the questions that we at Streamr need to answer in order to guide our token approach. Token economics are crucial because they are the mechanism by which the network captures the value created by user adoption. The mechanism incentivises people to participate in running the network, which enables decentralization, which enables the vision. We have recently begun research into token economics with BlockScience, and the project’s token economics will be designed during Milestone 2 and implemented in Milestone 3.Another big deliverable related to the Network is the scalability research for the current milestone. The goal of that research is twofold: to show how the bandwidth requirements for publishers stay constant regardless of the number of nodes, and to prove that the network has good and predictable latency, which grows logarithmically with the number of subscribing nodes. Both of these properties are very desirable for Streamr from a scalability point of view. In recent experiments, we observed the selected metrics in networks of up to 2048 nodes, running in real-world conditions, distributed to 16 different AWS locations globally.There were some major setbacks along the way. For example, we had to abandon the initially planned emulator approach because the emulator was adding severe artifacts to the measurements for large network sizes. Having to resort to actual real-world experiments made the process much slower and more expensive (spinning up thousands of virtual machines on AWS is not exactly cheap), but on the bright side, the results carry much more impact because they represent real-world performance. I have to say, the results turned out great and very competitive, even against the best centralized message brokers! The research will be published as a Network whitepaper very soon — the supporting text about the results is being finalised right now. The paper will be a crucial document for anyone thinking about utilising the Network for any business-critical or large-scale purpose.Data UnionsAnother big piece of work for 2020 is bringing the Data Union framework to market. The Data Union framework is our implementation of data crowdselling, a redistribution of data ownership which means that individuals can regain ownership over their personal data, rather than just the tech giants (who hoard and sell user data under the protection of T&Cs).Under the Data Union framework, users can pool their own data with that of other users, allowing them to increase their data’s value and then sell it in a Data Union, via the Streamr Marketplace.The functional flow of a Data UnionThe Data Union framework has been in private beta since late last year. One of the app teams with early access is Swash, which has seen steady organic adoption since it was first demoed at Mozfest last year. Users can also hide any data they prefer to keep private, thus returning choice and autonomy to the individual. This is a major step forward in our vision for a new data economy.And personal data ownership is something that people want, at least according to a research project that we ran in January. 
As our research partners at Wilsome stated: “Once we explained and demonstrated the concept of crowdselling and Data Unions, most people liked it, and some loved it.”Right now our focus is supporting the creation of more Data Union products like Swash, finalising developer documentation to support that, and fixing all bugs uncovered during the early-access phases. The full launch is taking place in autumn, with a marketing push to promote this disruptive new framework in the personal data monetisation space.We also started planning what the next big upgrade to the framework might look like; a ‘version 2.0’. In this new approach, the operator/validator model, Merkle proofs, and freeze period required by Monoplasma might be replaced with a side-chain plus inter-chain bridge to advocate a fully on-chain approach for better robustness and security, as well as fast withdrawals.Enterprise adoptionPartnerships have played a significant role in Streamr’s growth. Based on what we have learned over the last year of business development, we have made some changes to our enterprise partnerships tactics.2017–2018 saw a surge of excitement around paper partnerships in the space, but they were predominantly PR-led and rarely led to real adoption. Therefore in 2019, we set up TX as a vehicle to secure solid partnerships by systematically searching out actual, value-adding use cases, and by having the capability to offer solutions and services on top of the technology.At the height of the crypto hype, partnerships efforts across the space were focused on publicly telling stories about future collaboration. The new, down-to-earth approach is quite the opposite: once the enterprises really start building new capabilities by piloting new technology, they tend to keep quiet about it, and put NDAs in place to ensure their partners keep quiet about it too. Since the goal is no longer to produce news about partnerships every week, to an outside observer things may seem a bit quiet. This is the trade-off between talking about things and actually doing things, and in our view the latter is the only approach that can lead to actual substance, value creation, and serious adoption of the technology.TX enterprise adoption modelBringing in commercial drivers solved the paper partnerships problem: only serious enterprises are willing to pay for the work needed, and getting paid for the work actually enables TX to participate in those projects (as opposed to the unsustainable model of the project team having to spend time on partner projects for free). This approach has been effective in terms of bringing interactions to the table that are serious, concrete and commercially grounded. TX has the liberty to pursue prospective partnerships as they see fit, both pioneering a model for anyone to create a solutions business on top of Streamr, as well as allowing Streamr project resources to be spent more effectively on delivering the roadmap and advancing the vision.In tandem with this fresh partnership approach, this year we established a Growth team within Streamr. The team’s core objective will be to increase adoption across the board, with a particular focus on Data Unions in 2020. 
This objective will be accomplished via a multi-prong strategy of user feedback, research, nurturing the developer ecosystem, special project commissions and of course our ability to involve TX in enterprise partnerships.FinancesAt the time of writing, two and a half years into the project, around half of the funds raised in the token launch have been spent. The beginning of the project saw a spending peak as a result of initial set-up costs: legal and other crowdfunding-related expenses, team-building and setting up offices. Before the crypto market crash, spending was more liberal across the industry (anyone remember the boat party around Consensus 2018, where the organisers gave away two Aston Martins to random attendees?). While none of our project funds were lost in the crash, we have since introduced a more restrained approach to spending.Milestone 1 was significant, covering much more than one-third of the development work, and a little bit more than one-third of the project budget. We’re on track to complete the project within its planned schedule (five years) and budget, and we should see a reduction in our expenses towards the end, when development work starts to approach completion. As an example of how our work up until now will manifest in more sustainable spending, TX will make enterprise partnership efforts self-sustainable, which saves project funds and helps extend the lifespan of the tech infinitely. Mechanisms for funding the long-term maintenance of the technology far beyond the crowdfunded phase can also be included in the Network token economics and/or on the Marketplace — ideas that we’re exploring as part of the token economics research track.Community Fund / Growing the Streamr ecosystemThis year we also added another important layer to the Streamr Community Fund. We launched the fund in 2018 with 2,000,000 in DATA to empower community initiatives using our platform, and since then have funded several developers and projects with over 1,000,000 in DATA from the fund. But we realised that the one thing that DATA can’t buy is the kind of experience we have in our team at Streamr.Developers supported by the Community Fund have received advice from our skilled team about the tech and potential marketing strategies, and they can now receive guidance from members of the newly-formed Streamr Data Union Advisory board — industry leaders, veterans and academics who advocate for personal data ownership.UX of MyDiem, backed by the Community FundResponse to COVID-19The effects of the coronavirus pandemic are still unknown. Enterprises may withhold investment when it comes to exploring and piloting new technology in 2020. It’s possible that an economic downturn may impact the willingness to pilot cutting-edge technologies in the enterprise sector. Diminished demand also could impact TX as our partners face their own unique challenges, which may have a knock-on effect on maintaining self-sustainability. Thus far, Streamr remains robust in the face of this crisis, and if the situation doesn’t extend too far into 2021, we can remain on track with the progress we’ve made.ConclusionSo here we are — halfway through #buidling, with some major milestones behind us and well on our way towards the milestones ahead. These are uncertain times, there’s no doubt about that, but decentralization, self-sovereignty, and empowering people with control over their data and finances haven’t lost their importance. Quite the opposite, actually. 
With governments printing money for rescue packages, as well as leveraging personal data under martial law, these are even hotter topics than before. The innovations we come up with today will define the societies we live in tomorrow.We’ll continue to hold on to our values and the bets we’ve made, and keep working through Milestones 2 and 3 towards a more decentralized, more efficient, more empowered future.If you have any questions or comments about this update, be sure to join the AMA on the 11th of June at 15:00 CEST. Save the meeting link and post your questions in advance in this thread.Streamr project update, June 2020. was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
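One technical claim in this update is worth unpacking: the scalability experiments aim to show that publisher bandwidth stays roughly constant while delivery latency grows only logarithmically with the number of subscribing nodes. Below is a back-of-envelope illustration of why logarithmic growth is so benign; the fanout and per-hop latency figures are assumed purely for illustration and are not the whitepaper's measured topology parameters.

```javascript
// Back-of-envelope illustration of logarithmic latency growth.
// Assumption: each node relays messages to a small, fixed number of peers
// (the fanout), so reaching N subscribers takes roughly log_fanout(N) hops.
const FANOUT = 4          // assumed number of peers each node relays to
const HOP_LATENCY_MS = 50 // assumed average latency per overlay hop

function estimatedLatencyMs (subscribers) {
  const hops = Math.ceil(Math.log(subscribers) / Math.log(FANOUT))
  return hops * HOP_LATENCY_MS
}

for (const n of [16, 128, 1024, 2048, 1000000]) {
  console.log(`${n} subscribers -> ~${estimatedLatencyMs(n)} ms`)
}
```

In a model like this, doubling the subscriber count adds at most one extra relay hop, which is consistent with the update's claim that the 16 to 2048 node real-world runs remained competitive even against centralized message brokers.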

Streamr network

20. 06. 02

News: US politician James F...

News: US politician James Felton Keith joins Streamr Data Union Advisory Board, among several othersThe US politician and entrepreneur, James Felton Keith (JFK), an outspoken advocate for fair remuneration in exchange for personal data, has joined Streamr’s newly inaugurated Data Union Advisory Board.The Advisory Board will guide Streamr in its endeavour to empower internet users through the use of Data Unions. Data Unions allow internet users to crowdsell their information for the first time in the internet’s history, whether it’s their musical preferences via a Spotify Data Union, or their browsing history via Swash, the first Data Union in the Streamr ecosystem. Instead of tech giants, individuals can reclaim control and ownership over their own data.As privately-commissioned research by Streamr has shown, internet users are indeed eager to sell their data. However, until recently, there hasn’t been a marketplace for private data vendors.“I believe that personal data is an individual’s property. And, as such, individuals deserve to receive a fair share of the value they’re co-creating. So far we’ve been lacking the infrastructure to do so, Data Unions are the way to go so everyone can receive a data dividend.” — JFKAlong with JFK, industry veterans and academics alike have joined Streamr’s effort to unlock the hidden value of personal data through Data Unions, and create a fair and just data economy. Other prominent members of the Streamr Data Union Advisory Board include the Italian economist Maria Savona, who is a professor of Innovation and Evolutionary Economics at the University of Sussex in the UK. Maria is a former member of the High Level Expert Group on the Impact of Digital Transformation on EU Labour Markets for the European Commission.“One of the main challenges of the data economy is unpacking the black box of large tech’s business models, in order to understand the massive private value concentrations stemming from personal data. We need to build on the existing European legal frameworks we’ve been provided with, like the GDPR, to go beyond protecting privacy, allowing individuals to have broader agency on personal data, and be given choices on whether and how to share their data freely through intermediaries such as Data Unions.”RadicalxChange’s president, Matt Prewit, has also signed up to the board. At RadicalxChange, he advocates for a reform of the data economy and is a known voice in the Web 3 space. And this, in essence, mirrors what the Data Unions framework is doing — bringing the best of the Web 3 space into the Web 2 stack; decentralizing power through improved user control and the ability to monetise data evenly.“Currently, we’re seeing a mismatch between value creators online and those who extract value online. I think that, through the integration of Web 3 technologies, we can rebalance the power dynamics of today’s internet. Through Data Unions, value creators get the opportunity to reclaim their ownership and to get remunerated fairly.”Other members of Streamr’s new Data Union Advisory Board include Arnold Mhlamvu, Brian Zisk and Peter Gerard — all three music and film industry veterans who will help Streamr make Spotify or Netflix Data Unions a new normal. Mhlamvu launched Beatroot Africa, the fastest-growing digital content distribution company in Africa. Zisk produces conferences, including the Future of Money Summit, the SF MusicTech Summit, and other events including hackathons. He is also a seed investor and advisor to Chia Network. 
Gerard is an award-winning filmmaker and entrepreneur, and a leading expert in marketing and distribution for films and series.

With Alex Craven, the board has acquired a seasoned Data Union advocate. Since 2014, Alex has been exploring the issues surrounding personal data and trust, working on a Data Mutual Society concept. He is now the founder of the gov-tech startup The Data City, and has previously joined Streamr at Mozfest to talk about Data Unions live on stage as part of the 'Should we sell our data?' panel.

And last but not least, Davide Zaccagnini has also joined the Streamr Data Union Advisory Board. Davide is a former surgeon and informatics researcher at MIT. While holding leadership positions in US startups and corporations, he served on the Advisory Board of the W3C. He will help Streamr navigate the complex world of standards and regulations when it comes to introducing the Data Union framework as a new global tool.

News: US politician James Felton Keith joins Streamr Data Union Advisory Board, among several… was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 05. 20

Dev Update April 2020

Welcome to the Streamr dev team update for April 2020. A lot has happened in the last month, so let's kick off. Streamr has:

- Released a new version of the Marketplace
- Integrated Uniswap exchange functionality
- Tested multi-tracker support for the Network
- Continued WebRTC implementation for the Network

A quick editor's note: we are adding a new section to our monthly dev update titled Deprecations and breaking changes. As you might have guessed, it is to keep all developers building on top of the Streamr ecosystem informed about upcoming deprecations (features or components that are no longer supported) and breaking changes (alterations in functions, APIs, SDKs and more that require a code update on the developer side).

The newly deployed Marketplace contains a suite of analytics that users can explore on published Data Union products — the number of users belonging to a particular Data Union, aggregated earnings, estimated earning potential per user and more. Here you can see the example for Swash, published on the Marketplace. Note that Swash is still in its beta phase and the Data Union product has been migrated to a newer version of a Data Union smart contract, so the current metrics don't show the full picture.

We also deployed a long-awaited Uniswap integration on the Marketplace. Thanks to this decentralized exchange (DEX), data buyers can now use either ETH or DAI to pay for a subscription, in addition to DATA tokens. This is an important milestone because it simplifies the purchase process, which had caused some friction for new users.

Recently, the Network developer team finished testing a multi-tracker implementation. For any readers who are not yet familiar with the role a tracker plays in the Network, our core engineer Eric Andrews wrote the following in his recent blog post on the Network workshop:

An important part of the Network is how nodes get to know about each other so they can form connections. This is often referred to as 'peer discovery'. In a centralized system, you'll often have a predetermined list of addresses to connect to, but in a distributed system, where nodes come and go, you need a more dynamic approach. There are two main approaches to solving this problem: trackerless and tracker-based.

In the tracker-based approach, we have special peers called trackers whose job it is to facilitate the discovery of nodes. They keep track of all the nodes that they have been in contact with, and every time a node needs to join a stream's topology, they will ask the tracker for peers to connect to.

A representation of the physical links of the underlay network

Now that we have finished testing the tracker model, the next step is to try to create an on-chain tracker registry and let Network nodes read the tracker list directly from there. This can be accomplished via a smart contract on the Ethereum network, so that this whole process of peer discovery can be handled in a decentralized way. In future, richer features could be deployed for the tracker registry, such as reputation management and staking to lower the possibility of misbehavior or network attacks.

The team made further progress on the Network side with the gradual implementation of WebRTC for the nodes. We recently ran an experiment, running over 70 WebRTC nodes in a local Linux environment, and the results were promising.
That gave us additional assurance to proceed with the full implementation.Regarding the Data Union development progress, we noticed there have been some performance issues and potential bugs on the balance calculation, due to Data Union server restarting. We sincerely apologize for the inconvenience caused, and we are working to improve the Data Union architecture to guarantee higher stability before the official public launch later this year.Deprecations and breaking changesThis section summarizes all deprecated features and planned breaking changes.June 1st, 2020: Support for Control Layer protocol version 0 and Message Layer protocol versions 28 and 29 will cease. This affects users with outdated client libraries or self-made integrations dating back more than a year. The deprecated protocol versions were used in JS client libraries 0.x and 1.x, as well as Java client versions 0.x. Users are advised to upgrade to newest libraries and protocol versions.June 1st, 2020: Currently, resources on Streamr (such as streams, canvases, etc.) have three permission levels: read, write, and share. This will change to a more granular scheme to describe the exact actions allowed on a resource by a user. They are resource-specific, such as stream_publish and stream_subscribe. The upgrade takes place on or around the above date. This may break functionality for a small number of users who are programmatically managing resource permissions via the API. Updated API docs and client libraries will be made available around the time of the change.Further away (date TBD): Authenticating with API keys will be deprecated. As part of our progress towards decentralization, we will eventually end support for authenticating based on centralized secrets. Integrations to the API should authenticate with the Ethereum key-based challenge-response protocol instead, which is supported by the JS and Java libraries. At a later date (TBD), support for API keys will be dropped. Instructions for upgrading from API keys to Ethereum keys will be posted well in advance.Further away (date TBD): Publishing unsigned data will be deprecated. Unsigned data on the network is not compatible with the goal of decentralization, because untrusted nodes could easily tamper data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will be ceased prior to reaching that. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above), which enables data signing by default.Below is the more detailed breakdown of the month’s developer tasks. If you’re a dev interested in the Streamr stack or have some integration ideas, you can join our community-run dev forum here.As always, thanks for reading.NetworkMulti-tracker support is done. Now working on reading tracker list from smart contractMoving forward with WebRTC implementation after local environment testingContinuing fixes for Cassandra storage and long resend issues.Data UnionsSome Java client issues were affecting Data Union joins, but these should all be fixed nowImproved Data Union Server monitoring. 
- Join process is being continuously monitored in production
- Team started implementing storing state snapshots on IPFS
- JS client bug fixes to solve problems with joins in the Data Union server
- Data Union developer docs are being finalised

Core app (Engine, Editor, Marketplace, Website)
- Implementing UI for managing the buyer whitelist for the Marketplace
- New Marketplace version has been deployed with Data Union metrics
- New product views and Uniswap purchase flow are now live

Dev Update April 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
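As a sketch of what "reading the tracker list from a smart contract" could look like on the node side, here is a minimal example using ethers. The registry address, ABI and the getNodes() view function are purely illustrative assumptions; the actual interface is defined by the Streamr tracker registry contract.

```javascript
// Illustrative sketch: a broker node reading its tracker list from an
// on-chain registry at startup. Address, ABI and method name are assumptions.
const { Contract, getDefaultProvider } = require('ethers')

const REGISTRY_ADDRESS = '0x0000000000000000000000000000000000000000' // placeholder
const REGISTRY_ABI = ['function getNodes() view returns (string[])']  // hypothetical read method

async function fetchTrackers () {
  const provider = getDefaultProvider('mainnet')
  const registry = new Contract(REGISTRY_ADDRESS, REGISTRY_ABI, provider)
  // Read the current tracker list (e.g. tracker WebSocket URLs) on startup
  const trackers = await registry.getNodes()
  console.log('Discovered trackers:', trackers)
  return trackers
}

fetchTrackers().catch(console.error)
```

Because such a registry is just a publicly readable contract, trackers can be registered and run by independent parties, which is what makes peer discovery decentralisable rather than a single point of failure.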

Streamr network

20. 05. 19

Our thoughts on the EU Data...

A few weeks ago some team members of the Streamr project attended the MyData Global community meeting where the recent EU Data Strategy paper was discussed in detail. For those of you not familiar with the organisation, MyData Global is an NGO, working on transforming the EU’s GDPR from legal into actionable rights. We recently became official members and signed the MyData declaration, which promotes “moving towards a human-centric vision of personal data.”Why is the EU Data Strategy important to us?The Data Union framework we’re developing here at Streamr builds on the premise outlined in the GDPR’s article 20 on data portability, namely that:“The data subject shall have the right to have the personal data transmitted directly from one controller to another.”Data portability grants us the right to take the data we’ve created on one platform with us to another platform of our choosing. However, the law grants platform providers a 30-day period to make data “portable” and furthermore does not give concrete guidelines on the format in which the data is handed over. But what if people want to port, or sell their data in real-time? And yes, they do.Legal rights need to become actionable rightsThis is one of the topics addressed by the new EU Data Strategy. MyData’s board member Teemu Ropponen, argues that we need to:“Move from formal to actionable rights. The rights of GDPR should be one click rights. I should not go through hurdles to delete or port my data. We need real-time access to our rights.”Individual users should have the agency to control data about themselves. At the same time, we recognise the immense potential open access to data would bring. Digital businesses require the use of personal data but, beyond that, researchers, startups, SMEs and governments can profit from a more democratised, open access.MyData Global has a goal to develop a fair, prosperous, human-centric approach to personal data. That means that people get value from their own data and can set the agenda on how their data is used. In order to make this a reality, the ethical use of personal data needs to be promoted as the most attractive option to businesses.Europe is falling behind in the Data EconomyViivi Lähteenoja, another MyData Global board member, pointed out during her presentation that Europe realises that it’s falling behind when it comes to its share in the data economy. But there is still time to change this. As stated in the recent EU Data Strategy paper:“The stakes are high, since the EU’s technological future depends on whether it manages to harness its strengths and seize the opportunities offered by the ever-increasing production and use of data. A European way for handling data will ensure that more data becomes available for addressing societal challenges and for use in the economy, while respecting and promoting our European shared values.”Data is absolutely crucial in solving today’s issues. Just consider the apps that are currently being built to tackle the outbreak of the Covid-19 pandemic. Developing the right tools for our society will become much easier when access to high-quality data sets becomes far easier. One important point in this will be the facilitation of data-sharing on a voluntary basis.The EU wants to tackle this problem head-on by the creation of a European data space. This is not supposed to be about ‘one platform to rule them all’, but an ecosystem of ecosystems where all data is dealt with in accordance with European laws and values. 
Its creation is one of the main goals of the European Data Strategy:“Those tools and means include consent management tools, personal information management apps, including fully decentralized solutions building on blockchain, as well as personal data cooperatives or trusts acting as novel neutral intermediaries in the personal data economy. Currently, such tools are still in their infancy, although they have significant potential and need a supportive environment.”Personal Data Spaces — The EU’s version of Data UnionsTo create better governance and control around personal data, the EU will create so-called Personal Data Spaces. These spaces will serve as neutral data brokers, between internet users and platform providers. But, as the strategy paper notes, there is currently a lack of tools for people to exercise their rights and gain value from data in a way that they want.At Streamr, we have little doubt that our open source Data Unions framework will provide just the tools the EU is searching for and will therefore play a central role in bringing about this vision.But here’s the catch. Sitting on the frontlines of data portability, we know that in order to make these tools a reality, the law needs to be strengthened. And soon. GDPR Article 20 needs to be amended through the European Data Act, which is to be passed in 2021, so that it allows users one-click rights. As the board members of MyData rightfully note: “Right now data portability is not good enough, what is needed is live portability.”The EU Data Strategy was published in February and you can take a look here. If you’re interested in watching some of the presentations from the last MyData meeting, take a look here.Our thoughts on the EU Data Strategy was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 05. 06

Information
Platform ERC20
Accepting
Hard cap -
Audit -
Stage -
Location -