Streamr Network

Decentralized data protocol

Homepage: https://streamr.network/


Exchanges that listed the coin: 2
Symbol: DATA
Dapp: To be released
Project introduction

The Streamr Network's mission is to build decentralized infrastructure for real-time data, replacing central message brokers with a global peer-to-peer network.

Executives and partners

Henri Pihkala

CEO

Risto Karjalainen

COO

Nikke Nylund

Co-Founder

MOBI

Fastems

Golem

Latest News

There is no news posted at the moment

Medium

We're partnering with Lumos Labs to bring to you the Streamr Data Challenge

We're excited to announce that we're bringing Streamr to one of the biggest developer talent pools in the world: India. At Streamr, we're always looking for opportunities to push innovation in the realm of data and to build decentralized data economies. With this vision, we're partnering with Lumos Labs, a tech ecosystem enablement startup, to bring you the Streamr Data Challenge.

Tapping into India's Potential

India holds the potential to build a decentralized data economy. It currently leads the global data consumption market, driven by increasing mobile data connectivity (3G/4G), falling data tariffs, rising smartphone penetration, and growing broadband connectivity across the country. The exponential data growth in India is projected to continue, with internet traffic expected to increase fourfold from 21 exabytes in 2016 to an estimated 78 exabytes in 2021, according to a report by Omidyar Network. At Streamr, we see an opportunity to incentivise Indian developers to leverage our platform to innovate and reinvent solutions to the biggest problems in India's booming big data arena.

The Streamr Data Challenge

The Streamr India Data Challenge is focused on opening up India's tech community to the problems that people face, or will face, with respect to data privacy, ownership, value and, most importantly, preserving their data dignity. On that note, we're challenging the Indian developer community to join us and build solutions to real-world problems in big data with blockchain technology, by leveraging Streamr's Data Union framework.

The four-month-long Data Challenge will also host meetups and webinars to help the Indian tech community learn about leveraging blockchain for big data problem statements and about building with Streamr. We will begin with a one-week-long mentor session, followed by a two-week-long intensive acceleration period. Ongoing support ranges from tech guidance on projects to networking opportunities, PR support and more. The winners will receive a cash prize of $5,000, and each team shortlisted in the first cohort will receive a $200 grant.

About the Partnership

Our partners, Lumos Labs, are experts at hosting open innovation programmes that encourage and incentivise innovators to push boundaries and bring solutions to problems faced by consumers and businesses. We will work closely with them to create a Streamr community in India by kicking off the Streamr India Data Challenge.

"We're excited to step into the thriving tech ecosystem that India is. We're sure that Streamr will be a well-received platform that can push innovation in the big data space," said Streamr's Head of Developer Relations, Matthew Fontana. "We're excited to join hands with Lumos Labs and support the Indian developer and startup community to take on the biggest problems in big data, namely data ownership and value sharing."

Raghu Mohan, Co-founder and CEO of Lumos Labs, expressed his team's enthusiasm to work with us: "We're excited to begin work with Streamr to bring India's tech community a new challenge with the Data Challenge. Incentivising the Indian tech community to build solutions in this space is a huge step towards enabling innovation."

We're excited to see what unfolds with the Streamr Data Challenge. You can learn more about Streamr at streamr.network and register for the Data Challenge at streamrdatachallenge.com. Stay tuned for more!

Originally published at blog.streamr.network on October 20, 2020.

Streamr network

20. 10. 20

News: Streamr signs pilot agreement with GSMA

News: Streamr partners with GSMA to deliver Data Unions to the mobile sector

Today, Streamr is announcing a partnership with GSMA, the industry body for mobile telecommunications. Streamr and GSMA have partnered to allow three mobile network operators (MNOs) to monetise their user data ethically.

GSMA and Streamr will work together to deliver a technological accelerator programme to selected MNOs. This initiative aims to fast track potential adoption of new technologies that permit users to share and monetise mobile device data in partnership with operators.

The 90-day programme, billed by GSMA as an "exciting opportunity" to pilot new privacy-centric technology, seeks to support new approaches to user data monetisation and help these innovations scale. It is designed to allow telcos and their users to jointly access billions in new revenue from the consumer insights market, in a manner that complies with regulatory environments. The new monetisation methods are likely to receive support from Brussels in the Data Services Act next year.

"Given regulatory changes, and rapidly changing consumer attitudes to both privacy and the value of their data, the only sustainable way for MNOs to monetise mobile data is by gaining overt consent from their users. We also know that the consumer insights industry is desperately underserved when it comes to data from mobiles. We are confident that Streamr's revolutionary Data Union framework will allow them to capture and record this consent dynamically and securely," said Shiv Malik, Head of Growth at Streamr.

Using a smartphone app, end users will be asked outright if they want to opt in to join a Data Union to sell their data in partnership with their network operator, and they will also be asked exactly which data they would like to sell in the process. No data will actually be sold as part of the pilot.

Streamr co-founder Henri Pihkala added: "It's very exciting to consider the potential for this pilot. MNOs are ideally positioned to unlock the rich customer insights that their subscribers create on their devices each day. Privacy-focused data monetisation that works with those users presents a significant new income stream, as the industry faces multiple pressures on existing revenues."

Immediate use cases for the pilot's data include consumer footfall and mobility intelligence for brands, retail operators and commercial landlords, as well as for use in events management and city planning applications.

The pilot will also include a significant research element, gathering user experiences on the ability to control how, and with whom, their data is shared, as well as how they feel about receiving a share of its value in the future. Learnings will also inform network user retention strategies.

Listen to Shiv Malik, Streamr's Head of Growth, talking about Data Unions in a recent interview with the BBC's Digital Planet.

Originally published at blog.streamr.network on October 19, 2020.

Streamr network

20. 10. 19

Dev Update, September 2020

Welcome to the September project dev update! Streamr has earned a bit of a reputation for ending the year strong, and as we round out Q3 2020 it's clear that this year will be no exception. Here are the main dev highlights of the month:

- The Network whitepaper is proceeding through peer review at IEEE
- New storage node and Cassandra cluster is up and running
- First round of audits complete for the Data Union 2.0 smart contracts
- Major refactor of the JS client in progress
- Network cadCAD models are working with realistic random topologies

Data Unions

The Data Union 2.0 smart contracts have now gone through the first round of security audit with no major findings. We're fixing some minor recommendations made by the auditors, after which they will check our fixes and the audit will be complete.

Core & Client Development

The JS client is being updated to the Data Union 2.0 era. It won't become the official release (latest tag) until 2.0 is officially launched late this year, but it is available on npm as an alpha build for builders to start trying it out.

The frontend team is busy preparing the Core UI for the transition from account API keys to Ethereum private keys. This transition is an important prerequisite for the progressive decentralization of the creation and management of streams on the Network.

The Network whitepaper received positive feedback during the peer review process at IEEE. That review is ongoing, with some requests for new information that we're following up on. The exciting takeaway here is that our results and findings were not challenged during this review, giving us even more confidence in our network design.

The Network team made improvements to the WebRTC implementation to reduce message latency. While certainly more complex, WebRTC has the added benefit of having mechanisms to work around firewalls and NATs, thereby increasing the chance of successful peer-to-peer connections.

Our collaboration with BlockScience continues. We are nearing the completion of the cadCAD modelling phase, before diving into the incentivisation modelling. Essentially, we're developing a digital twin of the Network to be able to simulate how various parameters affect its performance and security. The models are generating realistic random topologies, and we are expanding them to include the message-passing level. The next step is to simulate ten nodes with realistic rules and define stakeholder KPIs.

Deprecations and breaking changes

A number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked 'Date TBD' will be happening in the medium term, but a date has not yet been set.

The API endpoints for explicitly deleting data have been removed. Going forward, storage nodes will expire old data based on the data retention period set on the stream.

- /api/v1/streams/${id}/deleteDataUpTo
- /api/v1/streams/${id}/deleteDataRange
- /api/v1/streams/${id}/deleteAllData

The API endpoints to upload CSV files to streams have been removed. Storing historical messages to streams can be done by publishing the messages to streams normally.

- /api/v1/streams/${id}/uploadCsvFile
- /api/v1/streams/${id}/confirmCsvFileUpload

(Date TBD): Support for email/password authentication will be dropped. Users need to connect an Ethereum wallet to their Streamr user unless they've already done so. As part of our progress towards decentralization, we will end support for authenticating based on centralized secrets such as passwords. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.

(Date TBD): Support for API keys will be dropped. Applications integrating with the API should authenticate with the Ethereum key-based challenge-response protocol instead of API keys. As part of our progress towards decentralization, we will eventually end support for authenticating based on centralized secrets such as API keys. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.

(Date TBD): Support for unsigned data will be dropped. Unsigned data on the Network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).

Thanks for reading! If you're a developer interested in contributing to the Streamr ecosystem, consider applying to the Streamr Data Fund for financial backing to fast track your plans.

Originally published at blog.streamr.network on October 15, 2020.
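The unsigned-data deprecation above relies on publishers signing each message with an Ethereum key so that other nodes can detect tampering. As a rough illustration of that idea only (not Streamr's actual message format, wire protocol or SDK code), here is a minimal sketch using the ethers v5 library; the stream ID and payload shape are invented for the example.

```typescript
import { Wallet, utils } from "ethers"; // ethers v5

// Illustrative only: sign a stream message with an Ethereum key and verify it
// on the receiving side. This sketches the general signing idea, not the
// actual Streamr message format or SDK API.
async function main() {
  const publisher = Wallet.createRandom(); // the publisher's Ethereum identity
  const payload = JSON.stringify({
    streamId: "example.eth/demo/tramdata", // made-up stream ID
    timestamp: Date.now(),
    data: { speed: 42 },
  });

  // Publisher side: sign the serialized payload.
  const signature = await publisher.signMessage(payload);

  // Any node can recover the signer address from the signature and compare it
  // to the expected publisher, so tampered payloads are detectable.
  const recovered = utils.verifyMessage(payload, signature);
  console.log("valid:", recovered === publisher.address);
}

main().catch(console.error);
```

Because the signer's address is recoverable from the signature alone, nodes along the propagation path can validate messages without holding any shared secret, which is what makes this approach compatible with a decentralized network.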

Streamr network

20. 10. 15

Data Unions: Questions on data selling

1). How are Data Unions ethically different from Big Tech, seeing as they are trading user data? Yes, users can consent, but why get involved in this sordid industry at all?

Firstly, consent is a big deal. At the moment, the data broking industry does not work on the basis of informed and overt consent. People are pressured into signing away permissions which are themselves buried on page 34, subsection B of a 62-page contract. It's fake consent. Just shifting that, to make the whole transaction overt and informed (what we'd call rich consent), massively changes the ethical basis for engagement.

Secondly, not all data monetisation is about individual targeting. I think that's where most of the ethical issues arise. I sell data about myself, 'Shiv Malik', so that advertising companies out there can manipulate me. Well, I agree, that isn't great. But there are other forms of data that are really incredibly useful.

When it comes to deindividuated or aggregated data, the ethical harms posed are quite different. If I can sell my data, anonymised and as part of a much larger collective, then I'm allowing third-party companies to make decisions based on collective behaviour. Assuming that the data cannot be disaggregated and I cannot be meaningfully identified (granted, that is often hard to achieve), my own privacy is not jeopardised.

Collecting behavioural information about humanity is of course what, say, universities do all the time. In today's world, we all depend on other people receiving decent information about society: where to put investment, what drugs work, modelling weaknesses in infrastructure, how to improve sales, where pollution is greatest. The list goes on and on. And the markets for aggregated data, where the data buyer really isn't interested in targeting the individual but does care about collective behaviour, are pretty vast. If that is a sordid business, then we should shutter modern society now and go back to living in caves.

Let me give some examples. A hotel maker wants to know where to invest next; data about travel patterns would be good. That's not the same as wanting likely holidaying individuals to advertise to. Or a city needs to know about road planning. Or Tesco wants to know about footfall. Or I want to know about local pollution. Or I'm a TV producer who wants to know what people are watching on Netflix. None of these data products require that the individuals be identified and targeted as an inherent part of the product. It might happen that they are, or could be. But the product doesn't need to know who the individuals are, or how they can be targeted, to be highly useful.

Finally, you should, I'd humbly suggest, really be turning your question on its head. Unfortunately, you are already involved in this "sordid" business whether you like it or not. You are already a data product to Silicon Valley. So the question really should be this: what are you doing about it? If your answer is to try and keep washing your hands of it all, is that really going to work? You can't Pontius Pilate your way out of this problem.

When people are buying Amazon Alexas in their millions, it's clear that the privacy movement has failed. Simply standing to one side and calling for more legislation will not by itself improve people's privacy. And it will not stop Silicon Valley from monopolising the information we all create, because this is not just about an individual's privacy. It's also about socio-economic power. Big Tech not only sucks up all the capital and cash; it effectively governs our lives because of the monopolised data it collects.

We hope that Data Unions lead the way in breaking those data monopolies. By creating governance structures and organisations that ensure professional people work on your behalf and in your interest, information should only be licensed and utilised by people with the highest ethical standards. By the way, these ideas are not just Streamr's; they are supported by thinkers and practitioners like Jaron Lanier, the MyData movement, RadicalxChange, and very soon, we hope, the European Union.

Video link: https://youtu.be/reHOBrS7szg?t=100

2). Surely user data can still be exploited for unscrupulous means by unethical data buyers? Can users choose who their data is sold to? Why not now? When?

Great question. The short answer is yes, users can choose who their data is sold to. From Streamr's technological perspective you can do it now. We've just implemented buyer whitelisting, so if Data Union admins want to restrict sales of the product to approved buyers, they can enable that. But for end users to have their opinions recognised, application builders must implement the buyer whitelisting mechanism at their end.

For example, you can imagine an interface asking whether you're happy to sell your data to only a charity, to charities and government organisations, or to anyone. At the backend this would mean creating different buckets or data products on our Marketplace. It would be down to a Data Union builder to ensure they KYC'd potential buyers and that those buyers were whitelisted to access only the right buckets of data, where all the users who make up that real-time data stream were happy to sell to that sort of company.

So it's easy enough to do for Data Union administrators, and I believe it is something Swash is working on. It's also something which needs to be moulded into Data Union governance. Our hope is that there will be supra-national legislation to deal with Data Union governance which ensures these sorts of standards must be implemented and aren't just a 'nice to have'.

3). Are there any types of data that are off-limits to collect?

This is a personal opinion, but I think there are real issues with deeply personal data that is unlikely to change over time, for example your genetic code. Of course, it turns out this is getting traded all the time anyway. But for Data Unions to get into that would disturb me, because we're far too early into this game to know the consequences with data that is so high stakes.

Otherwise I'm fairly liberal. This data only gets collected if people want to have it collected. They have to take proactive actions to ensure that's the case, like downloading an app that specifically tells you it's there to collect this and that real-time data (and in return, you get paid). That's very unlike today, where that information is being taken from you. There are literally hundreds of apps, and you're bound to have one of them on your phone, that hoover up your location data. They literally know where you sleep, eat, walk, and go to the toilet. You think you're searching for weather or finding out where the cheapest gas is, but in fact you're supplying deeply private information.

4). Are there any general terms of service agreements to consider for integrating, collecting and selling data from third-party devices, such as FitBits or mobile phones?

If I understand your question right, then yes.
There are few companies out there that don't defend their data silos with a ring of lawyers! Many platforms are happy for third parties to integrate their services. To do that, those third parties often need access to user data. However, how that user data is then licensed for use is often set out in third-party developer T&Cs (see, for example, subsection h of Spotify's T&Cs).

However, the European Commission has stated that it wants to change this in two ways in the next few months. Firstly, it wants to ensure hardware manufacturers open up their device data. Secondly, it wants to revamp Article 20 to ensure that everyone has real-time programmatic access to their data from any platform. Hopefully that happens sooner rather than later, but when it does, it is going to be a huge revolution for Data Unions and the world.

A revamped Article 20 would allow people to port machine-readable data from their Netflix, LinkedIn, Google or Spotify accounts, for example, and allow them to send those real-time streams to a DU. Such data, stripped of personally identifying information, might be bought by production houses looking to create better TV programmes, recruitment agencies, developers looking to create better map applications, or developers and musicians looking to create a cooperative music platform alternative.

5). How are regional differences in data collection laws managed?

To be honest, this is still a bit of an unknown, and one for Data Union admins rather than Streamr itself. Of course, we have US and EU policy and legal experts to draw upon from our Data Union advisory network. And as with other parts of the Data Union building experience, we'll be looking to integrate basic best-practice information into the general Data Union builder resource pool pretty shortly.

6). Do you have any guidelines on price setting and market value estimation?

Every data product is going to be worth something completely different, so it's a bit pointless trying to give guidance based on guesstimates. Of course, that doesn't mean those numbers can't be known. For most Data Union products, there will likely be an already existing (if nascent and under-the-table) market to draw pricing expectations from. We worked like that to help Swash price its data, and we would happily work with other viable projects to find those answers.

7). Can Data Unions themselves be sold (ownership transfer)?

That is a REALLY good question, and one that I am concerned about. As one person put it in the market research we conducted at the start of the year: what's the point in contributing to building a Data Union if it just gets bought out by Google?

Getting this right is going to be a multi-pronged strategy. Firstly, from the ground up, the Data Union builders themselves need to bind themselves into the right structures. Cooperatives (and Data Unions are a sort of platform cooperative) are meant to be owned by users. The issue with them is that it is always hard to raise investment from a small bunch of prospective users to the point that they can compete with larger commercial enterprises. The cooperativist Nathan Schneider believes he has answers to this, which is why we've been working with him on those solutions.

Secondly, Data Unions need to be regulated within the next few years. In return for legally being the only type of organisation that should be able to handle the licensing of consumer data, they should be responsive to their users and have cooperative equity structures, so companies can't easily be sold without members saying so.

On the other hand, Data Unions should also be subject to the same provisions as any other platform when it comes to porting data. Since they can be fairly easy to build, users should be able to port their information and streams to new Data Unions if serious governance issues do arise.

Is there more that tech can do to keep the equity from being captured by adversarial interests? Yes! That's also why we are working with DAOstack and other DAO builders to get a Data Union DAO off the ground as a PoC. Now THAT is really exciting.

8). How do I prove the authenticity of Data Union data?

As a buyer? As we now know, data buying does not, and likely never will, happen at the click of a button. Organisations that purchase data spend tens of thousands of dollars on it and won't simply click a button. They do their due diligence regardless of the tech on offer. It is up to Data Union builders to ensure their data products are secure, refined, and provide clean feeds of information, otherwise everyone loses. Swash, for example, improved this in its latest release by introducing a Captcha to deter bots. There are of course tools that we can integrate into the Streamr Data Union framework, but for now we anticipate that the open source community will fill this gap.

9). What level of support can Streamr provide in setting up a Data Union?

Building a Data Union is not an easy process. Partly it's about the tech, and we are absolutely here for that part of the journey. At the ground level, we are always improving our developer tooling, video tutorials and technical documentation. Our Growth team, including our Head of Developer Relations, Matthew Fontana, is also always happy to jump on a video call to guide you through any technical issue you may be having, or just to chat through a business idea. Our developer forum is also there as a repository of information from past learnings.

We also have a Community Fund which can provide substantial financial support all the way from the idea stage to the point where you get VC funding.

But as our first Data Union, Swash, has grown, we've realised that building up your user base and connecting with potential data buyers are also integral parts of building a Data Union that we have to support. That's why we have extra resources available, including ground-breaking market research and access to our Data Union advisory board, who not only believe in Data Unions but can provide advice and mentorship on growing your user base, legal and policy issues, and negotiating data sales.

10). Do Data Unions need to be open source?

Being open source is important for building trust with the users of a Data Union, but it is not a strict requirement. We're glad that Swash has done this and we will always encourage other Data Union builders to do the same.

Originally published at https://blog.streamr.network on October 7, 2020.

Streamr network

20. 10. 08

It’s time to build Data Unions

We can probably all agree that what's been happening so far in 2020 has been unprecedented. From the pandemic, to wildfires, to the tensions around the upcoming presidential elections in the US, our control over the events around us seems to be slipping away, and power imbalances, whether from big corporations or political entities, are on the rise.

But corporate control and political power are also a matter of infrastructure and how the systems around us are designed. In today's turbulent times, access to accurate data is one of the biggest assets for communities and businesses. This becomes increasingly harder as big corporations silo off the information we all create on a daily basis. With the revolutionary Data Union framework, Streamr seeks to turn the current information asymmetry upside down and democratise the sharing and monetising of data flows for everyone.

This is not to say that Data Unions will solve all of our problems overnight, but they nevertheless constitute an important building block, a tool that we're giving to our community to start creating more open, more democratic flows of information.

Data Unions are an ethical new way to sell user data, through the Streamr peer-to-peer real-time data network. By integrating into the Data Union framework, or building a Data Union app, interested developers can easily bundle and crowdsell the real-time data that their users generate, gain meaningful consent from users, and reward them by sharing data sales revenue. By building Data Unions we can create open data ecosystems.

To facilitate the building of Data Unions, we are launching the Streamr Data Challenge, a two-month-long hackathon, in cooperation with Lumos Labs. During the programme we invite more than 200 India-based programmers, designers and entrepreneurs to innovate and create using the Streamr Data Union framework.

We also invite builders from around the world to apply for grants through the Streamr Data Fund. Currently there are 7,500,000 DATAcoins in the fund. Head to the Streamr developer forum if you want to learn more about seed funding opportunities, or share your ideas and get inspired by other Data Union builders.

Data Unions are more important now than ever. We are very excited to witness recent developments at the European Commission. These developments point to a future in which Europe will make Data Unions the new normal, rather than Google and Facebook's oligopoly. Through the planned Data Intermediary Certification Scheme, which will most likely be introduced in a lightweight version as early as 2021, Data Unions can become official players within the data economy. This will hopefully also encourage greater political and societal interest in new approaches to democratising access to data, and rising demand for transparent data sharing models.

So, what are you waiting for? It's time to build Data Unions!

Originally published at blog.streamr.network on October 5, 2020.

Streamr network

20. 10. 05

Let’s talk about Data Unions

Streamr's Head of Growth, Shiv Malik, recently held an AMA on Data Unions for the GAINS Telegram community. Their questions led to an insightful discussion that we've condensed into this blog post.

What is the project about in a few simple sentences?

At Streamr we are building a real-time network for tomorrow's data economy. It's a decentralized, peer-to-peer network which we hope will one day replace centralized message brokers like Amazon's AWS services. On top of that, one of the things I'm most excited about is Data Unions. With Data Unions anyone can join the data economy and start earning money from the data they already produce. Streamr's Data Union framework provides a really easy way for devs to start building their own Data Unions, and it can also be easily integrated into any existing apps.

Okay, sounds interesting. Do you have a concrete example you could give us to make it easier to understand?

The best example of a Data Union is the first one that has been built out of our stack. It's called Swash and it's a browser plugin that you can download in a few clicks. Basically, it helps you monetise the data you already generate (day in, day out) as you browse the web. It's the sort of data that Google already knows about you. But this way, with Swash, you can actually monetise it yourself. The more people that join the Data Union, the more powerful it becomes, and the greater the rewards are for everyone as the data product sells to potential buyers.

Very interesting. What stage is the project/product at? It's live, right?

Yes. It's currently live in public beta, and the Data Union framework will be launched in just a few weeks. The Network is on course to be fully decentralized at some point next year.

How much can a regular person browsing the internet expect to make, for example?

That's a great question. The answer is, no one quite knows yet. We do know that this sort of data (consumer insights) is worth hundreds of millions and really isn't available in high quality. So, with a Data Union of a few million people, everyone could be getting 20–50 USD a year. But it'll take a few years at least to realise that growth. Of course, Swash is just one Data Union amongst many possible others (which are now starting to get built out on our platform!).

Swash now has 3,186 members. They need to get to 50,000 before they become really viable, but they are yet to do any marketing, so all that is organic growth. You can explore these numbers in more detail by downloading the executive summary of research we commissioned to investigate the market and consumer attitudes towards Data Unions.

I assume the data is anonymised, by the way?

Yes. And there are in fact a few privacy-protecting tools Swash supplies to its users.

How does Swash compare to Brave?

Brave offers a consent model where users are rewarded if they opt in to see selected ads targeted to them from their browsing history. They don't sell your data as such. Swash can of course be a plugin within Brave, so you can make passive income browsing the internet, while also consenting to advertising if you want to earn BAT.

Of course, it's Streamr that is powering Swash, and we're looking at powering other Data Unions, say for example mobile applications. The holy grail might be having already existing apps and platforms out there integrating Data Union tech into their apps, so people can consent (or not) to having their data sold, and then get a cut of that revenue when it does sell.

The other thing to recognise is that the Big Tech companies monopolise data on a vast scale. Data that we of course produce for them. That monopoly is stifling innovation. Take for example a competitor map app. To effectively compete with Google Maps or Waze, it needs millions of users feeding real-time data into it. Without that, it's like Google Maps used to be: static and a bit useless.

Right, so how do you convince these Big Tech companies that are producing these big apps to integrate with Streamr? Does it mean they wouldn't be able to monetise data as well on their end if it becomes more available through an aggregation of individuals?

If a map application does manage to scale to that level, then inevitably Google buys them out; that's what happened with Waze. But if you have a Data Union that bundles together the raw location data of millions of people, then any application builder can come along and license that data for their app. This encourages all sorts of innovation and breaks the monopoly.

We're currently having conversations with mobile network operators to see if they want to pilot this new approach to data monetisation. And that's what's even more exciting. Just be explicit with users: do you want to sell your data? Okay, if yes, then which data points do you want to sell? The mobile network operator (like T-Mobile, for example) can then organise the sale of the data of those who consent, and everyone gets a cut. Streamr, in this example, provides the backend to port and bundle the data, and also the token and payment rail for the payments.

So for big companies (mobile operators in this case), it's less logistics, handing over the implementation to you, and simply taking a cut?

It's a vision that we'll be able to talk about more concretely in a few weeks' time 😁

Compared to having to make sense of that data themselves (in the past) and selling it themselves?

Sort of. We provide the backend to port the data and the template smart contracts to distribute the payments. They get to focus on finding buyers for the data and ensuring that the data being collected from the app is the kind of data that is valuable and useful to the world. (Through our sister company TX, we also help build out the applications for them and ensure a smooth integration.)

The other thing to add is that the reason why this vision is working is that the current, deeply flawed data economy is under attack. Not just from privacy laws such as GDPR, but also from Google shutting down cookies, bidstream data being investigated by the FTC (for example), and Apple making changes to iOS 14 to make third-party data sharing more explicit for users. All this means that the only real places for thousands of multinationals to buy the sort of consumer insights they need to ensure good business decisions will be owned by Google/FB etc., or from SDKs, or through the Data Union method: overt, rich consent from the consumer in return for a cut of the earnings.

What is the token use case? How did you make sure it captures the value of the ecosystem you're building?

The token is used for payments on the Marketplace (such as for Data Union products, for example) and also for the broker nodes in the Network (we haven't talked much about the P2P network, but it's our project's secret sauce). The broker nodes will be paid in DATAcoin for providing bandwidth. We are currently working together with BlockScience on our token economics.
We've just started the second phase of their consultancy process and will soon be able to share more on the Streamr Network's token economics. But if you want to sum up the Network in a sentence or two: imagine the BitTorrent network being run by nodes who get paid to do so, except that instead of passing around static files, it's real-time data streams. That of course means it's really well suited to the IoT economy. The latest developments on tokenomics were discussed in a recent AMA with Streamr CEO Henri Pihkala.

Can the Streamr Network be used to transfer data from IoT devices? Is the network bandwidth sufficient? How is it possible to monetise the data received from a huge number of IoT devices?

Yes, IoT devices are a perfect use case for the Network. When it comes to the Network's bandwidth and speed, the Streamr team recently did extensive research to find out how well the Network scales. The result was that it is on par with centralized solutions. We ran experiments with network sizes between 32 and 2048 nodes, and in the largest network of 2048 nodes, 99% of deliveries happened within 362 ms globally. To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms, and that's a centralized service! So we're super happy with those results. Here's a link to the paper.

Are the messages in the Network encrypted?

Yes, the messages in the Network are encrypted. Currently all nodes are still run by the Streamr team. This will change in the Brubeck release, our last milestone on the roadmap, when end-to-end encryption is added. This release adds end-to-end encryption and automatic key exchange mechanisms, ensuring that node operators cannot access any confidential data. If, by the way, you want to get very technical, the encryption algorithms we are using are AES (AES-256-CTR) for encryption of data payloads, RSA (PKCS #1) for securely exchanging the AES keys, and ECDSA (secp256k1) for data signing (the same as Bitcoin and Ethereum).

Streamr has three Data Unions: Swash, Tracey and MyDiem. Why does Tracey help fisherfolk in the Philippines monetize their catch data? Do they only work with this country or do they plan to expand?

Tracey is one of the first Data Unions on top of the Streamr stack. Currently we are working together with WWF-Philippines and the UnionBank of the Philippines on a first pilot with local fishing communities in the Philippines. WWF is interested in the catch data to protect wildlife and make sure that no overfishing happens, and at the same time the fisherfolk are incentivised to record their catch data by being able to access micro-loans from banks, which in turn helps them make their business more profitable. So far, we have lots of interest from other places in South East Asia which would like to use Tracey too. In fact, TX have already had explicit interest in building out the use cases in other countries, and not just for seafood tracking but also for many other agricultural products.

Are there plans in the pipeline for Streamr to focus on consumer-facing products themselves, or will the emphasis be on the further development of the underlying engine?

We're all about what's under the hood. We want third-party devs to take on the challenge of building the consumer-facing apps. We know it would be foolish to try and do it all!

We all know that blockchain has many disadvantages as well, so why did Streamr choose blockchain as a combination for its technology? What's your plan to merge blockchain with your technologies to make it safer and more convenient for your users?

We're not a blockchain ourselves; that's important to note. The P2P network only uses blockchain tech for the payments. Why on earth, for example, would you want to store every single piece of info on a blockchain? You should only store what you want to store, and that should probably happen off-chain. So we think we got the mix right there.

How does the Streamr team ensure good data is entered into the blockchain by participants?

Another great question! From the product-buying end, this will be done by reputation. But ensuring the quality of the data as it passes through the network, if that is what you also mean, is all about getting the architecture right. In a decentralized network that's not easy, as data points in streams have to arrive in the right order. It's one of the biggest challenges, but we think we're solving it in a really decentralized way.

What are the requirements for integrating applications with a Data Union? What role does the DATA token play in this case?

There are no specific requirements as such, just that your application needs to generate some kind of real-time data. Data Union members and administrators are both paid in DATA by data buyers coming from the Streamr Marketplace.

Regarding security and legality, how does Streamr guarantee that the data uploaded by a given user belongs to them, and that they can monetise and capitalise on it?

That's a sort of million-dollar question for anyone involved in a digital industry. Within our system there are ways of ensuring that, but in the end the negotiation of data licensing will still, in many ways, be done human to human and via legal licenses rather than smart contracts, at least when it comes to sizeable data products. There are more answers to this, but it's a long one!

Originally published at blog.streamr.network on September 23, 2020.
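For readers curious about the primitives named above, here is a purely illustrative sketch of AES-256-CTR payload encryption using Node's built-in crypto module. It is not the Streamr client's actual implementation: the key exchange (handled by RSA in the scheme described above) is out of scope, so the example simply assumes the publisher and subscriber already share the key.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Illustrative only: encrypt and decrypt a data payload with AES-256-CTR.
// In practice the key would be distributed via the RSA-based key exchange
// mentioned above; here both sides are assumed to hold `key` already.
const key = randomBytes(32);   // 256-bit symmetric key
const iv = randomBytes(16);    // per-message initialization vector

const payload = JSON.stringify({ temperature: 21.4, ts: Date.now() });

// Publisher side: encrypt the serialized payload.
const cipher = createCipheriv("aes-256-ctr", key, iv);
const encrypted = Buffer.concat([cipher.update(payload, "utf8"), cipher.final()]);

// Subscriber side: decrypt with the same key and IV.
const decipher = createDecipheriv("aes-256-ctr", key, iv);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]).toString("utf8");

console.log(decrypted === payload); // true
```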

Streamr network

20. 09. 23

Dev Update July, August 2020

Although summer is usually a time to take things slow, this year we decided to lean in, ship, publish, and, as always, make steady progress towards our longer-term vision of full decentralization. Here are some of the highlights:

- Network whitepaper published. The real-world experiments show it's fast and scalable. This blog highlights some key findings, and you can also check out the full whitepaper.
- Launched the website update with an updated top page, a new Data Unions page and a Papers page.
- Data Unions 2.0 smart contracts are now ready and undergoing a third-party security audit. Remaining work consists of loose ends, such as the SDKs and Core application, as well as creating an upgrade path for existing DUs.
- Data Unions public beta running smoothly. Incremental improvements were made in preparation for the official launch.
- Started work on the Network Explorer, which shows the real-time structure and stats of the Streamr Network.
- Started work on human-readable, hierarchical, globally unique stream IDs with namespaces based on ENS, for example streamr.eth/demos/tramdata.
- Storage rewrite complete; now setting up the new storage cluster in production. This will fix resend problems and prepare for opening up and decentralizing the storage market.
- Token economics research with BlockScience continues in Phase 2, working on simple cadCAD models.
- End-to-end encryption key exchange ready in the Java SDK, while the JS SDK is still WIP.
- Buyer whitelisting feature added to the Marketplace.

Network findings

Releasing the Network whitepaper marks the completion of our academic research phase of the current Network milestone. This research is especially important to the Streamr project's enterprise adoption track, and focused on the latency and scalability of the Network, battle-tested with messages propagated through real-world data centres around the world. The key findings were:

- The upper limit of message latency is roughly 150–350 ms globally, depending on network size
- Message latency is predictable
- The relationship between network size and latency is logarithmic

These findings are impressive! Not only do they show that the Network is already on par with centralized message brokers in terms of speed, they also give us great confidence that the fully decentralized network can scale without introducing significant message propagation latency. We invite you to read the full paper to learn more.

Network Developments

While the release of the Network whitepaper has been a long-term side project for the Network team, development of the Network continues to accelerate. As real-time message delivery is the primary function of the Network, so far we haven't focused much on decentralizing the storage of historical messages. However, as the whole Network is heading towards decentralization, so is the storage functionality. The long-term goal regarding storage is that anyone will be able to join in and run a storage node. Stream owners will be able to hire one or more of these independent storage nodes to store the historical data in their streams. The completion of the storage rewrite is another big step towards full decentralization.

Token economics research

The token economics research track with BlockScience has proceeded to Phase 2. In Phase 1, mathematical formulations of the actors, actions, and agreements in the Network were created. In the current Phase 2, simulation code is being written for the first time. The simulations leverage the open source cadCAD framework developed by BlockScience. The models developed in Phase 2 are simple toy models, whose purpose is to play around with the primitives defined in Phase 1 and verify that they are implemented correctly. In Phase 3, the first realistic models of the Streamr Network economy will be implemented.

Data Unions upgrade

On the Data Unions front, development of the 2.0 architecture is progressing well and the smart contracts are being security audited at the moment. Robustness and security have been the key drivers for this upgrade; while the 1.0 architecture is running smoothly, we need to be forward-thinking and prepare for the kind of scale and growth we expect to see in the future. Data Unions 2.0 will be the first big upgrade after the launch of the current architecture. Data Unions created with the current architecture will be upgradable to the Data Unions 2.0 architecture once it is available. We look forward to describing the upgrade in detail in a future blog post.

More control over your data

We released a heavily requested feature on the Marketplace: buyer whitelisting. This feature allows data product owners and Data Union admins to control who can purchase and gain access to the product's data. It is useful in growing enterprise adoption of the Marketplace, because in B2B sales it's often required that the transacting parties identify each other and perhaps sign traditional agreements.

Deprecations and breaking changes

A number of API endpoints need to be retired and replaced to be compatible with our vision of decentralization. This section summarises deprecated features and upcoming breaking changes. Items marked 'Date TBD' will be happening in the medium term, but a date has not yet been set.

The API endpoints for explicitly deleting data will be removed in the next update, because they are rarely used and are not compatible with decentralized storage. Going forward, storage nodes will expire old data based on the data retention period set on the stream.

- /api/v1/streams/${id}/deleteDataUpTo
- /api/v1/streams/${id}/deleteDataRange
- /api/v1/streams/${id}/deleteAllData

The API endpoints to upload CSV files to streams will be removed in the next update, because the feature is rarely used and the centralized backend is unable to sign the data on behalf of the user. Storing historical messages to streams can be done by publishing the messages to streams normally.

- /api/v1/streams/${id}/uploadCsvFile
- /api/v1/streams/${id}/confirmCsvFileUpload

(Date TBD): Support for email/password authentication will be dropped. Users need to connect an Ethereum wallet to their Streamr user unless they've already done so. As part of our progress towards decentralization, we will end support for authenticating based on centralized secrets such as passwords. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.

(Date TBD): Support for API keys will be dropped. Applications integrating with the API should authenticate with the Ethereum key-based challenge-response protocol instead of API keys. As part of our progress towards decentralization, we will eventually end support for authenticating based on centralized secrets such as API keys. Going forward, authenticating with cryptographic keys/wallets will be the only supported method of authentication.

(Date TBD): Support for unsigned data will be dropped. Unsigned data on the Network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).

Thanks for reading! If you're a developer interested in contributing to the Streamr ecosystem, consider applying to the Community Fund for financial backing to fast track your plans.

Originally published at blog.streamr.network on September 15, 2020.
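The API-key deprecation above points to an Ethereum key-based challenge-response protocol as the replacement. The sketch below shows the general shape of such a flow, not Streamr's actual API: the helper functions, challenge format and ethers v5 usage are illustrative assumptions. A server issues a one-time challenge, the client signs it with its Ethereum key, and the server recovers the signer address from the signature.

```typescript
import { Wallet, utils } from "ethers"; // ethers v5
import { randomBytes } from "crypto";

// Invented, illustrative challenge-response flow (not the real Streamr API).

// Server side: issue a one-time challenge for the client to sign.
function issueChallenge(): string {
  return "Login challenge: " + randomBytes(16).toString("hex");
}

// Client side: prove control of an Ethereum key by signing the challenge.
async function signChallenge(wallet: Wallet, challenge: string): Promise<string> {
  return wallet.signMessage(challenge);
}

// Server side: recover the signer from the signature and compare it to the
// address the client claims to control.
function verifyChallenge(challenge: string, signature: string, claimedAddress: string): boolean {
  const recovered = utils.verifyMessage(challenge, signature);
  return recovered.toLowerCase() === claimedAddress.toLowerCase();
}

async function demo() {
  const wallet = Wallet.createRandom();
  const challenge = issueChallenge();
  const signature = await signChallenge(wallet, challenge);
  console.log(verifyChallenge(challenge, signature, wallet.address)); // true
}

demo().catch(console.error);
```

The appeal of this pattern over API keys is that no long-lived secret ever travels to the server; the client only ever reveals signatures over single-use challenges.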

Streamr network

20. 09. 15

News: Streamr appoints new Head of Developer Relations

The Streamr project is pleased to announce that Matthew Fontana has been appointed Head of Developer Relations. Matthew will be starting the role a few weeks before the expected public launch of the Streamr Data Union framework, which lets developers build applications that enable users to control, monetise and license their data in tandem with thousands of others.

"As a former front end developer for Streamr, Matthew was the perfect candidate to take on the role," said Streamr co-founder Henri Pihkala. "Not only does he know our technology stack inside out, and have the requisite deep understanding of crypto, he also has great presentation and teamwork skills, and has already been instrumental in the creation of the developer docs and video tutorials we have today. I welcome him to the new position and really look forward to watching our ecosystem grow under his lead."

New appointee Matthew Fontana said: "I'm excited to join the Growth team in my new role, and I'm really looking forward to inspiring developers and giving them the confidence to build on the Streamr stack. My goal is to ensure that the Streamr developer ecosystem becomes as expansive as possible. Our tech is bleeding edge and the platform is truly empowering in every sense, so I expect to be pretty busy."

(Photo: Streamr's new Head of Developer Relations, Matthew Fontana)

Streamr's Head of Growth, Shiv Malik, said: "As long-standing members of the Streamr project, Matthew and I already have a close working relationship, so I'm really looking forward to working with him on a day-to-day basis. One thing I've always appreciated is that whenever there is the pressure of a deadline, Matthew has always brought a calm and quiet air of diligence and expertise to the moment. The Developer Relations role is now more important than ever. Matthew's predecessor, Weilei Yu, created a fantastic foundation over the last year and a half, helping to establish an ecosystem, a community forum, the basic developer documentation and, of course, helping Swash and others get established. As part of the wider Growth & Marketing team effort, Matthew will no doubt take the Streamr ecosystem, with a current focus on building out several more Data Union startups, to new heights over the next few years."

If you are a third-party developer looking to learn more about what the Streamr stack can do for you, contact Matthew Fontana via:

- The community forum
- Telegram: @matthew_streamr
- Twitter: @mattofontana
- LinkedIn: matthewjfontana
- Email: matthew.fontana@streamr.network

Originally published at blog.streamr.network on September 4, 2020.

Streamr network

20. 09. 07

How to create a Data Union

Data Unions are more than just a new data monetisation strategy: they are the beginning of a new relationship between creators and their users. This post serves as a getting-started guide for those creators that are ready to get building.

Data Unions (DUs) enable creators to share data sales revenue with users via crowdsourced, scalable data sets generated by the users of their apps and services. DUs rest proudly on top of the Streamr and Ethereum stacks. Under the hood:

- Ethereum is used to store and transfer value,
- the Streamr Network transports the real-time data,
- the Streamr Core app is used to build and manage the DU contract, and
- the Streamr Marketplace monetises the data.

How do you start a Data Union? Here are the four steps:

1. Define the sort of data you'll be streaming to your DU.
2. Deploy the DU contract on Ethereum.
3. Integrate your end user app.
4. Publish the DU on the Marketplace.

I will briefly explain these steps, and if you prefer, you can also get to know the process by watching me create a DU in the screencast series, or by reading the DU docs. The accompanying demonstration GitHub repo of example code can also be found there.

1. Define the sort of data you'll be streaming to your DU

As the DU creator, you'll first need to decide what sort of data will be included in the DU and how to model that data into streams. A firehose approach is typical, and we have some general advice on that topic in the streams section of the docs.

2. Deploy the DU contract on Ethereum

This part requires some crypto basics. If it's your first time, please check out the Getting Started section of the docs. Using the Streamr Core interface, you will customise the parameters of the DU contract, such as the price of the data and the revenue share percentage.

3. Integrate your end user app

Using one of Streamr's client libraries is highly recommended. The essential functionality, such as member balance checks and member withdrawals, is wrapped in easy-to-use library method calls. A rough integration sketch follows this post.

4. Publish the DU on the Marketplace

If you've gotten this far, this step is a breeze. It's a one-click publish Ethereum transaction to have your DU available for purchase on the Marketplace.

🎉 Congrats! You're all set. 🎉

The docs go much deeper into the implementation details, and we encourage you to reach out on the developer forums to share your experience with the platform.

Originally published at blog.streamr.network on September 1, 2020.
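To make step 3 a little more concrete, here is a rough sketch of what publishing an end user's data point from an app might look like with the streamr-client JavaScript library. Treat it as an assumption-laden illustration: the stream ID, payload fields and environment variable are placeholders, and the Data Union join/withdraw calls are deliberately omitted because their exact names depend on the SDK version; check the DU docs for the current API.

```typescript
import StreamrClient from "streamr-client";

// Illustrative sketch of an end user app feeding a Data Union's stream.
// "EXAMPLE_STREAM_ID" and the payload shape are placeholders, not real values.
const client = new StreamrClient({
  // The user's Ethereum key doubles as their Streamr identity and is used
  // to sign published messages.
  auth: { privateKey: process.env.USER_PRIVATE_KEY as string },
});

async function publishDataPoint(): Promise<void> {
  // Publish one real-time data point to the stream backing the Data Union.
  await client.publish("EXAMPLE_STREAM_ID", {
    url: "https://example.com",   // made-up payload for a Swash-like browsing DU
    visitedAt: Date.now(),
  });
}

publishDataPoint().catch(console.error);
```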

Streamr network

20. 09. 02

Streamr Network: Performanc...

The Corea milestone of the Streamr Network went live in late 2019. Since then a few people in the team have been working on an academic whitepaper to describe its design principles, position it with respect to prior art, and prove certain properties it has. The paper is now ready, and it has been submitted to the IEEE Access journal for peer review. It is also now published on the new Papers section on the project website. In this blog, I’ll introduce the paper and explain its key results. All the figures presented in this post are from the paper.The reasons for doing this research and writing this paper were simple: many prospective users of the Network, especially more serious ones such as enterprises, ask questions like ‘how does it scale?’, ‘why does it scale?’, ‘what is the latency in the network?’, and ‘how much bandwidth is consumed?’. While some answers could be provided before, the Network in its currently deployed form is still small-scale and can’t really show a track record of scalability for example, so there was clearly a need to produce some in-depth material about the structure of the Network and its performance at large, global scale. The paper answers these questions.Another reason is that decentralized peer-to-peer networks have experienced a new renaissance due to the rise in blockchain networks. Peer-to-peer pub/sub networks were a hot research topic in the early 2000s, but not many real-world implementations were ever created. Today, most blockchain networks use methods from that era under the hood to disseminate block headers, transactions, and other events important for them to function. Other megatrends like IoT and social media are also creating demand for new kinds of scalable message transport layers.The latency vs. bandwidth tradeoffThe current Streamr Network uses regular random graphs as stream topologies. ‘Regular’ here means that nodes connect to a fixed number of other nodes that publish or subscribe to the same stream, and ‘random’ means that those nodes are selected randomly.Random connections can of course mean that absurd routes get formed occasionally, for example a data point might travel from Germany to France via the US. But random graphs have been studied extensively in the academic literature, and their properties are not nearly as bad as the above example sounds — such graphs are actually quite good! Data always takes multiple routes in the network, and only the fastest route counts. The less-than-optimal routes are there for redundancy, and redundancy is good, because it improves security and churn tolerance.There is an important parameter called node degree, which is the fixed number of nodes to which each node in a topology connects. A higher node degree means more duplication and thus more bandwidth consumption for each node, but it also means that fast routes are more likely to form. It’s a tradeoff; better latency can be traded for worse bandwidth consumption. In the following section, we’ll go deeper into analyzing this relationship.Network diameter scales logarithmicallyOne useful metric to estimate the behavior of latency is the network diameter, which is the number of hops on the shortest path between the most distant pair of nodes in the network (i.e. the “longest shortest path”. The below plot shows how the network diameter behaves depending on node degree and number of nodes.Network diameterWe can see that the network diameter increases logarithmically (very slowly), and a higher node degree ‘flattens the curve’. 
This is a property of random regular graphs, and this is very good — growing from 10,000 nodes to 100,000 nodes only increases the diameter by a few hops! To analyse the effect of the node degree further, we can plot the maximum network diameter for various node degrees:

[Figure: Network diameter in a network of 100,000 nodes]

We can see that there are diminishing returns for increasing the node degree. On the other hand, the penalty (the number of duplicates, i.e. bandwidth consumption) increases linearly with node degree:

[Figure: Number of duplicates received by the non-publisher nodes]

In the Streamr Network, each stream forms its own separate overlay network and can even have a custom node degree. This allows the owner of the stream to configure their preferred latency/bandwidth balance (imagine such a slider control in the Streamr Core UI). However, finding a good default value is important. From this analysis, we can conclude that:

- The logarithmic behavior of network diameter leads us to hope that latency might behave logarithmically too, but since the number of hops is not the same as latency (in milliseconds), the scalability needs to be confirmed in the real world (see the next section).
- A node degree of 4 yields a good latency/bandwidth balance, and we have selected this as the default value in the Streamr Network. This value is also used in all the real-world experiments described in the next section.

It's worth noting that in such a network, the bandwidth requirement for publishers is determined by the node degree and not by the number of subscribers. With a node degree of 4 and a million subscribers, the publisher only uploads 4 copies of a data point, and the million subscribing nodes share the work of distributing the message among themselves. In contrast, a centralized data broker would need to push out a million copies.

Latency scales logarithmically

To see if actual latency scales logarithmically in real-world conditions, we ran large numbers of nodes in 16 different Amazon AWS data centers around the world. We ran experiments with network sizes between 32 and 2048 nodes. Each node published messages to the network, and we measured how long it took for the other nodes to get the message. The experiment was repeated 10 times for each network size.

The image below displays one of the key results of the paper. It shows a CDF (cumulative distribution function) of the measured latencies across all experiments. The y-axis runs from 0 to 1, i.e. 0% to 100%.

[Figure: CDF of message propagation delay]

From this graph we can easily read things like: in a network of 32 nodes (blue line), 50% of message deliveries happened within 150 ms globally, and all messages were delivered in around 250 ms. In the largest network of 2048 nodes (pink line), 99% of deliveries happened within 362 ms globally.

To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that's a centralized service! Decentralization comes with unquestionable benefits (no vendor lock-in, no trust required, network effects, etc.), but if such protocols are inferior in terms of performance or cost, they won't get adopted. It's pretty safe to say that the Streamr Network is on par with centralized services even when it comes to latency, which is usually the Achilles' heel of P2P networks (think of how slow blockchains are!).
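To get a feel for the numbers behind these plots, the diameter behaviour of random regular graphs is easy to reproduce with off-the-shelf tooling. The sketch below (Python with networkx; the sizes and seed are arbitrary, and this is not the Network's own topology code) samples d-regular random graphs, prints their diameters, and notes the per-message copy count implied by the node degree:

```python
# Illustrative sketch only: reproduces the diameter-vs-degree behaviour of random
# regular graphs discussed above. networkx's generator stands in for the Network's
# topology construction; sizes and seed are arbitrary.
import networkx as nx

def diameter_of_random_regular(degree: int, num_nodes: int, seed: int = 42) -> int:
    """Sample one random d-regular graph and return its diameter (the longest shortest path)."""
    g = nx.random_regular_graph(degree, num_nodes, seed=seed)
    if not nx.is_connected(g):
        # Random regular graphs with degree >= 3 are almost surely connected,
        # but guard against a rare disconnected sample.
        raise ValueError("sampled graph is disconnected; try another seed")
    return nx.diameter(g)

if __name__ == "__main__":
    degree = 4  # the default node degree mentioned above
    for n in (128, 512, 2048):
        print(f"nodes={n:>5}, degree={degree}: diameter={diameter_of_random_regular(degree, n)}")

    # Publisher bandwidth is set by the degree, not the subscriber count:
    # with degree 4, a publisher uploads 4 copies of each message, whether there
    # are a thousand subscribers or a million.
    print(f"copies uploaded by the publisher per message: {degree}")
```

Rerunning the loop with higher degrees shows the flattening effect described above: each hop shaved off the diameter is paid for with proportionally more duplicate messages at every node.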
And the Network will only get better with time.

Then we tackled the big question: does the latency behave logarithmically?

[Figure: Mean message propagation delay in Amazon experiments]

Above, the thick line is the average latency for each network size. From the graph, we can see that the latency grows logarithmically as the network size increases, which means excellent scalability.

The shaded area shows the difference between the best and worst average latencies in each repeat. Here we can see the element of chance at play; due to the randomness in which nodes become neighbours, some topologies are faster than others. Given enough repeats, some near-optimal topologies can be found. The difference between average topologies and the best topologies gives us a glimpse of how much room for optimisation there is, i.e. with a smarter-than-random topology construction, how much improvement is possible (while still staying in the realm of regular graphs)? Out of the observed topologies, the difference between the average and the best is between 5–13%, so not that much. Other subclasses of graphs, such as irregular graphs, trees, and so on, can of course unlock more room for improvement, but they are different beasts and come with their own disadvantages.

It's also worth asking: how much worse is the measured latency compared to the fastest possible latency, i.e. that of a direct connection? While having direct connections between a publisher and subscribers is definitely not scalable, secure, or often even feasible due to firewalls, NATs and such, it's still worth asking what the latency penalty of peer-to-peer is.

[Figure: Relative delay penalty in Amazon experiments]

As you can see, this plot has the same shape as the previous one, but the y-axis is different. Here, we are showing the relative delay penalty (RDP): the latency in the peer-to-peer network (shown in the previous plot) divided by the latency of a direct connection measured with the ping tool. So a direct connection equals an RDP value of 1, and the measured RDP in the peer-to-peer network is roughly between 2 and 3 in the observed topologies. It increases logarithmically with network size, just like absolute latency.

Again, given that latency is the Achilles' heel of decentralized systems, that's not bad at all. It shows that such a network delivers acceptable performance for the vast majority of use cases, excluding only the most latency-sensitive ones, such as online gaming or arbitrage trading. For most other use cases, it doesn't matter whether it takes 25 or 75 milliseconds to deliver a data point.

Latency is predictable

It's useful for a messaging system to have consistent and predictable latency. Imagine, for example, a smart traffic system where cars can alert each other about dangers on the road. It would be pretty bad if, even minutes after publishing it, some cars still hadn't received the warning. However, such delays easily occur in peer-to-peer networks. Everyone in the crypto space has seen first-hand how plenty of Bitcoin or Ethereum nodes lag even minutes behind the latest chain state.

So we wanted to see whether it would be possible to estimate the latencies in the peer-to-peer network if the topology and the latencies between connected pairs of nodes are known.
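A natural way to produce such an estimate is to treat the overlay as a weighted graph, with each link weighted by its measured one-hop latency, and compute shortest paths from the publisher; that is the idea behind the Dijkstra-based estimates discussed next. Here is a minimal sketch, with a made-up topology and made-up latencies (the real experiments used the measured AWS topologies):

```python
# Illustrative sketch: estimate propagation delays from a known overlay topology and
# measured per-link latencies by taking shortest paths from the publisher.
# The topology and the millisecond values below are invented for the example.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("helsinki", "frankfurt", 25.0),
    ("frankfurt", "virginia", 45.0),
    ("helsinki", "virginia", 60.0),
    ("virginia", "tokyo", 75.0),
    ("frankfurt", "tokyo", 120.0),
])

publisher = "helsinki"

# Dijkstra gives, for every node, the delay along the fastest route from the publisher.
estimated = nx.single_source_dijkstra_path_length(g, publisher, weight="weight")

for node, delay_ms in sorted(estimated.items()):
    if node != publisher:
        print(f"estimated delay {publisher} -> {node}: {delay_ms:.0f} ms")

# The mean over non-publisher nodes approximates the mean propagation delay. Such
# estimates sit slightly below measured values, since per-node processing time is
# not modelled here.
others = [d for n, d in estimated.items() if n != publisher]
print(f"estimated mean propagation delay: {sum(others) / len(others):.0f} ms")
```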
We applied Dijkstra's algorithm to compute estimates for average latencies from the input topology data, and compared the estimates to the actual measured average latencies:

[Figure: Mean message propagation delay in Amazon experiments]

We can see that, at least in these experiments, the estimates seemed to provide a lower bound for the actual values, and the average estimation error was 3.5%. The measured value is higher than the estimated one because the estimation only considers network delays, while in reality there is also a little bit of processing delay at each node.

Conclusion

The research has shown that the Streamr Network can be expected to deliver messages in roughly 150–350 milliseconds worldwide, even at a large scale with thousands of nodes subscribing to a stream. This is on par with centralized message brokers today, showing that the decentralized and peer-to-peer approach is a viable alternative for all but the most latency-sensitive applications.

It's thrilling to think that by accepting a latency only 2–3 times longer than that of an unscalable and insecure direct connection, applications can interconnect over an open fabric with global scalability, no single point of failure, no vendor lock-in, and no need to trust anyone — all of that becomes available out of the box.

In the real-time data space, there are plenty of other aspects to explore which we didn't cover in this paper. For example, we did not measure the throughput characteristics of network topologies. Different streams are independent, so there is clearly scalability in the number of streams, and heavy streams can be partitioned, allowing each stream to scale too. Throughput is therefore mainly limited by the hardware and network connection used by the nodes involved in a topology. Measuring maximum throughput would essentially be measuring the hardware as well as the performance of our implementation. While interesting, this is not a high-priority research target at this point in time. And thanks to the redundancy in the network, individual slow nodes do not slow down the whole topology; the data will arrive via faster nodes instead.

Also out of scope for this paper is analysing the costs of running such a network, including the OPEX for publishers and node operators. This is a topic of ongoing research, which we're currently doing as part of designing the token incentive mechanisms of the Streamr Network, due to be implemented in a later milestone.

I hope this blog has provided some insight into the fascinating results the team uncovered during this research. For a more in-depth look at the context of this work, and for more detail about the research, we invite you to read the full paper.

If you have an interest in network performance and scalability from a developer or enterprise perspective, we will be hosting a talk about this research in the coming weeks, so keep an eye out for more details on the Streamr social media channels. In the meantime, feedback and comments are welcome. Please add a comment to the Reddit thread or email contact@streamr.network.

Originally published at blog.streamr.network on August 24, 2020.

Streamr network

20. 08. 25

Blockchain won’t solve your...

Blockchain won't solve your traceability issues if you're not capturing accurate data — the TX approach to understanding the problem space

Between 2000 and 2018, the global value of exported goods shot up from 6.45 trillion to 19.5 trillion U.S. dollars. With an increasing demand for exported products, there is also an increasing demand for accurate traceability data. According to the 2019 Food and Health Survey, nearly two-thirds of consumers said recognising the ingredients in a product impacted their buying decisions. Food labels are becoming more important than ever, as consumers increasingly seek information about the ingredients that go into their food. And it's not just the ingredients themselves that consumers are starting to demand. TE Food, The Trusted Food On Blockchain, suggests: "The pure presentation of traceability information will shift to telling the 'story of the food' in a way which the consumers can easily absorb. Attaching photos, videos, inspection documents, nutrition data will make the journey of the food more interesting."

There is huge demand for traceability solutions in a multitude of industries, including agriculture, fisheries, aggregates, and high-value products such as diamonds and alcoholic spirits, to name a few. With the hype around blockchain over the last few years, numerous companies are now offering blockchain-enabled traceability solutions, but they have failed to improve the overall quality of the data being captured and shared. Understandably, this has led to some fatigue around the use of blockchain in supply chain use cases.

This is because a blockchain does not solve traceability issues. A blockchain does play a key role in traceability, as it ensures the data logged is not tampered with once it has been saved to the blockchain. But the fundamental problem that must be solved before data is entered onto the chain is its accuracy and correctness. If the data was inaccurate before it was saved to the blockchain, it will continue to be inaccurate when you come back to access it. Without verification, a blockchain serves as an immutable ledger of garbage data that cannot be deleted. The issues surrounding the quality of data must be solved before it is placed on the chain.

This is where we do things differently at TX Tomorrow Explored. Our services commence with an activity we refer to as an Assess study. We want to offer our clients more than a software solution; we want to consider the space around the technical problem to understand all the pain points before making recommendations. To make this possible, we have structured our services in such a way that we first analyse the problem space so we can get to the heart of the issue, before we start discussing the software solution. This approach is a result of our team composition. The Assess phase is delivered by a combination of business consultants, service designers and developers. This means that in our analysis, we truly consider both the technical and business aspects of the problem.
Issues like the one mentioned above concerning verification are more likely to be identified when analysis is done from a variety of angles — industry, business and technical.

In one of our signature projects, Tracey — a traceability and trade data application used by fisherfolk in the Philippines — we performed an Assess with our partners at WWF, UnionBank and Streamr. In the Assess, we identified the need for a verification solution to support the use of a blockchain-enabled ledger for capturing and disseminating catch information. As a result, Tracey includes functionality that provides fisherfolk with incentives for providing data that has been verified. This makes Tracey a more complete solution for gathering data in the "first mile" of the supply chain. The solution can be applied in other industries that face challenges in ensuring the accuracy of data captured in the first mile.

Additional content: Listen to the TX Podcast with UnionBank on how Tracey is helping unbanked fisherfolk gain access to microfinancing.

What does the Assess involve?

Let's assume you have identified a problem or business opportunity that needs to be addressed with a traceability solution. The first thing we need to do is validate this hypothesis with an Assess study. This low-cost activity will give you some level of educated reassurance that a traceability solution is going to solve the problem and deliver the benefits you desire, before you invest too heavily in a particular software solution. The Assess work can last anywhere from 2 to 6 weeks, depending on the complexity of the project. We do this work in close collaboration with our client — in addition to validating and better understanding the problem, we also want to ensure there is close alignment on the end vision and objectives as we work through the process. The Assess activity includes:

- Data Value Chain Analysis: This is the main activity. It involves conducting primary research in the form of focus interviews and workshops with the actors in the value chain, coupled with desktop research into factors surrounding the value chain, such as compliance laws for exporting products and other relevant restrictions that need to be complied with. Undertaking this activity allows us to build a picture of the problem space like the one in figure 1 below.
- Digital strategy: This describes how we transition from what you have today to what you're aiming to have at the end of the project. Effectively, this is a roadmap with key activities identified, which will guide you through the problem space to achieving your objectives.
- Decision gateway: This is an open discussion on the pros and cons of introducing a traceability solution, and whether the business case is realistically viable for your organisation.

All being equal, and if we agree to go ahead, we'll complete the Assess with an outline description of the recommended pilot, a wireframe of the product and a program for testing.

We work using agile methodologies, adopting regular sprints and retrospectives throughout the delivery process.

The illustration below is an example where one of our business consultants conducted an Assess study on a handline fisheries value chain in the Philippines.
The key activities throughout the "bait to plate" value chain were researched and surveyed, with consideration given to the main actors, tasks, data collected and disseminated, and legal compliance.

[Figure: Handline Fishery Value Chain]

Once the Assess phase has been completed, we move into the Testing phase, which is followed by the Embedding phase. Testing can last anywhere from 8 to 12 weeks, depending on the complexity of the app needed. We would always recommend keeping the software solution to a Minimum Viable Product to keep things cost-effective during this period of testing and evaluation. Once some tangible results have been retrieved from the pilot, the software solution can be improved and optimised, ready for full roll-out in the Embed phase.

We call the final phase Embed because there's far more to implementing a good technology solution than simply handing it over. Sometimes changes to processes are needed, training for staff, integration with other technologies and so on. Whatever it is, we'll work alongside you throughout the process, until you're satisfied that the solution is suitably embedded into your organisation and supply chain.

How we work

If you're interested in undertaking an Assess for your organisation, please contact us at TX.company/contact.

Originally published at blog.streamr.network on August 19, 2020.

Streamr network

20. 08. 19

Decentralization may fix th...

Can't Touch This

Warren Buffett was once asked why he owned a 5% stake in The Walt Disney Company. In his reply, Buffett drew a comparison between Mickey Mouse and the human actors on the payroll of most movie studios: "It is simple, the Mouse has no agent".

Buffett reasoned that once you have created the first unit of an intangible asset, every unit after that costs next to nothing to produce. As you draw Mickey Mouse, you generate an infinitely scalable asset, a feature human actors don't share. Thus, while other competitors in the entertainment industry had comparable revenues, their profits were thinner due to the costs of paying the cast, the agents and the directors.

Software works the same way: it may be expensive to develop, but multiplying subsequent lines of code is a copy-and-paste process. If you were to sell computer hardware, on the other hand, every additional unit would require extra materials and labour.

The scalability of intangible assets, such as intellectual property and software, unlocks exponential growth and global reach because it allows organisations to escape the positive relationship between output and total cost of production.

Unfortunately, or luckily, we cannot softwarise everything. So how would you reach global presence in an industry that requires hardware to run operations, like transport or hospitality?

Here's the pitch: because tangible assets are not quickly and cheaply scalable, I am not going to invest in them. My workers will.

Be Less the new More (For More we cannot afford)

The rise of the sharing economy can be attributed to the minimalist movement, and the rise of the minimalist movement can be attributed to millennials being broke.

Sharing economy firms have been successful in creating a better experience for the consumer: from an easier, faster booking experience to review systems that incentivise quality maximisation. Nevertheless, the success of these companies is largely due to more competitive pricing. But cost reduction hasn't been achieved through the joint or alternating use of a resource that would otherwise sit idle or underexploited. Costs are merely transferred to the gig workers.

Take ride-sharing apps. These companies take roughly one-third of ride fares for providing a booking platform, while the driver has to shoulder the car purchase/lease, maintenance, gas, washes, insurance, social security expenses and labour risk — which, in times of COVID, is no small factor. As political economist Robert Reich puts it: "The big money goes to the corporations that own the software. The scraps go to the on-demand workers."

Make it 'till you Fake it

Since software is very easy to scale, it is also easy to replicate. Intangible assets tend to create spillovers, and firms realised that to fend off competition they should become as big as possible as quickly as possible. While this strategy might work, coming up with unit economics that hold true only in monopolistic conditions is not going to be labelled savvy business management. But these are strange times, and investors are willing to keep unprofitable companies on life support, high on a cocktail of loss aversion, excess capital and a perverted love for the founders.

As these firms reach juggernaut status without any trace of profitability, the narrative keeps its pendulous motion between "we are good because we bring more wage-earning opportunities to more people" and "we expect to be profitable within one year" to maintain the pretence of a functioning business model.
In this fabrication, the winners are the executives, with their obscene compensation that can only be justified by fundamental attribution errors (FAE), and the investors who passed the ticking bomb to a greater fool. Unsurprisingly, the parties that are better off are the unproductive ones.

Tech, tech-enabled and tick-tacks

The sharing economy's value proposition is nothing newfangled. A p2p network system that connects supply with demand is intuitively a good idea; in an analogue fashion, we have been doing that for the past 8,000 years. Without much originality, the business model was executed through a U-form corporate structure, which implies central authority and coordination, as well as central data processing.

[Figure: Example of a U-form business structure]

The centralized structure provides an attractive benefit in enabling the fulfilment of the company's vision through a clear chain of command. Innovation, for instance, is an exercise that requires some degree of centralization. In her book Quiet, Susan Cain argues that brainstorming with a large group of individuals tends to have a levelling effect on creativity, because the initial goal of conceiving great ideas quickly turns into reaching a consensus among participants, thus diluting the quality of the concepts.

In other words, if you are in the business of doing things differently, you shouldn't give too much weight to the opinion of the crowd.

"If I had asked people what they wanted, they would have said faster horses." (Attributed to Henry Ford)

Sharing economy firms brought some change, but their type of innovation is more of a one-off. These firms are not in the business of continuous innovation in the way pure tech enterprises are. Still, they like to market themselves as such, because on Wall Street "the new thing" never goes out of fashion (adding a premium to a stock price is, sometimes, as easy as adding "Technologies" or "Blockchain" to the name of the organisation). Yet under the surface, these firms are tech-enabled service companies. At the core, they are exchange systems. And exchange systems work better without a central processing unit.

While distributing creativity is a bad idea, distributing big data processing is a good one. Because the selection of experts or expert committees happens in a deterministic manner, their analysis may be more subject to political forces and other biases. Further, the greater headcount gives crowds a unique advantage when it comes to finding equilibrium points.

History has plenty of examples of data processing gone wrong because of centralization. To take a relatively recent one: when bread prices are set by a nepotistically selected group of government officials, people line up for the crumbs. When market forces are free to set prices, bread lines up for people.

The Network

To the on-demand worker, software in the sharing economy equals the latifundia of Medieval Europe: access to it is subject to reverence for the holy software-owner and acceptance of having no voice and little to no rights.

A fairer system may be a leaner system, scrubbed of the old business mould. Regrettably, in the early twenty-first century, the corporate apparatus comes with the software just like the medieval landowner comes with the field.
While the development of the end-user application can be left to anyone as long as the right incentives are in place, engineering a global back-end infrastructure for data transfer is a far more complex task.

To substitute an enterprise messaging system, you would need a universal, secure, robust, neutral, accessible and permissionless real-time data network. This should provide on-demand scalability and minimal up-front investment. There should be no vendor lock-in, no proprietary code, and no need to trust a third party with the data flowing through the network. This system should also integrate smart contracts, which are self-executing and have the terms of the agreement written into lines of code. Taking again the example of ride-sharing: GPS data points may funnel down to the smart contract to assess whether the ride has been completed according to the path set forth when the service was booked, releasing the ride fees to the driver only if the contractual obligations are met.

If you think this is a lot to ask, the Streamr Network ticks all the boxes. It should be said that the Network's full decentralization will be achieved in the next stage of development. Nevertheless, as of today, it is up and running like a charm.

2.0

The new organisation requires pioneering rules to be coded in. Among the many: the logic by which the smart contract rewards the operators, how ownership is distributed and diluted, which assets can claim ownership, and how governance takes place. These questions have no single right answer, and exploring these uncharted territories requires some rational extravagance. Research in the area, albeit limited, is growing steadily and is attracting an increasing number of professionals and devotees from all disciplines.

[Figure: Idea for a Decentralized Sharing Economy Firm]

As regards exchange systems, the argument that a centralized solution is always better is a weak one. While the execution risk in decentralizing the sharing economy is quite high, decentralization is a much stronger proposition because, on top of dissolving agency costs, it enables the working community to own the business as they earn their wages.

Asset ownership is key to the effort of reducing the inequality gap. The money printer will not stop going Brrr, and the currency in your pocket is still just colourised paper. What this means is that if your only source of income is your salary, then there is some probability that you will only get poorer. If monetary economics is not your cup of tea, I invite you to check the chart of the S&P 500 or any real estate index from 1971, then check the trend of real wages starting from the same period.

Wrapping Up

The new value models emerging from the convergence of the economic market system with information technology may terminate the traditional divide between the owners of capital and the workers. The Sharing Economy, essentially a collection of exchange systems, could be among the first to evolve from the old Industrial Value Model to a Distributed Value Model, where the right mix of technology and humanity can unleash greater economic potential.

Streamr network

20. 08. 13

You have the Right to a Dat...

Imagine having a personal diary where you write about your life. Now, imagine owning that diary and not being able to enforce your right to possession. It's yours, and yet you can't have it.

How did this happen?

The only Free Cheese is in The Mousetrap

Imagine further that, when you shopped for the notebook, the stationer said you could have it for free. He then mumbled something about the possibility of recording, accessing and using your writing for anything he liked.

Because your persona belongs to the underappreciated world of intangible assets, you couldn't grasp what was at stake there. Au contraire, the stationer — let's call him Mark, you know where I am heading with this — eagerly recognised that you had handed him your psychological profile and a live update of your sentiment. You entered a transaction with highly asymmetric possession of information. Now the stationery store is selling copies of your diary to any willing buyer interested in reading it and knowing your inner thoughts, without your knowledge.

And for Mark, it's raining billions.

You, just like me, got grifted by the folks selling free blank canvas for your thoughts, under a "Make the world a better place!" neon sign. We believed them because they looked so relatable in their shabby sweaters. These are the good guys, we thought, the ones that say "stay weird," "do what you love," and "don't be evil." After all, they are not THE BANKERS, those villainous souls living in greed-cladded skyscrapers, relentlessly concocting world domination.

Then came Theranos, Uber's self-driving cars, WeWork, Cambridge Analytica and the #DeleteFacebook movement, and suddenly we realised that the grass in Start-up Valley was greener only because it had been fertilised with bullshit.

Mark's Serendipity

We hectically scroll down the privacy policy form and feel relieved at the sight of the accept button. As cookie policies spook the internet, our adaptive unconscious is developing an automatic "Find & Click" response. In a mechanistic manner, we are removing decision points, leaving the cockpit of our digital life in the hands of our automatic behaviour patterns.

Terms and conditions constantly change because the way personal data is being harvested is constantly changing. In 2004, a young man creates a website rating girls by their appearance. Fast forward a few years and he's holding the greatest heap of personal information ever assembled by humanity. That happened fast, but not overnight.

Most internet services chose to be free to align with the internet value of Universal Access. Asking for money, apart from not being cool, also means fencing people out of your digital backyard. Unfortunately, being cool doesn't help to keep the lights on, and ads started frothing here and there.

Advertising went from a gut-based discipline to a scientific method entailing constant theorising and testing. The goal is no longer to conjecture, while chewing a cigar, what your customer may like. In the brave new world, the objective is to create an ever-increasing framework of understanding around your target audience that helps you reinforce the message in a recursive fashion. For that, you need the help of a sexy woman called A.I., along with her favourite food: data. A lot of it.

[Image source: Rhett Allain, WIRED]

The internet is the barn saving algorithms from famine. Everything we do online leaves a trace. The data we produce defines, in part, who we are, what we like, with whom we exchange information, what and who we care about.
Aside from Time, Body and Mind, your identity is arguably the most valuable asset you own.

The ruling economic thinking of this age says, in Milton Friedman's words, that "no-one takes care of somebody else's property as wisely as he takes care of his own." Why, then, is a society so keen on private property fine with giving away the most personal of properties?

Because we had no choice. Today, the last bastion protecting the exploitation of personal data stands upon the belief that there are no alternatives to centralized data retailing.

We are starting to see some cracks.

Break Free From The Mousetrap and They Will Follow

The General Data Protection Regulation (GDPR) defines the right to Data Portability as "the right to receive the personal data provided to a controller […] It also gives the right to request that a controller transmits this data directly to another controller".

Which is Latin for: what you do on social media, what you search on your browser, what you listen to on your phone — essentially all the data you produce using a service or a device — is yours. And just like a paper diary, you can take it wherever you want, even to a marketplace. The question is how to turn a piece of legislation like the GDPR into actionable rights.

In 2000, Chris Downs collected his on- and offline data and sold it on eBay for £150. It took him a few months and 800 pages to put everything together.

[Figure: Chris' printed data]

You can see how this is not scalable. Further, on its own, our data does not hold much value, but when combined, it aggregates into an attractive product from which buyers can extract insights. This is the idea underpinning Data Unions.

A Data Union is a framework, currently being built by Streamr, that allows people to easily bundle and sell their real-time data and earn revenue. Here's how it works: a developer creates an application that collects data from a multitude of data producers. What kind of data is open to the imagination: Swash collects browser searches, MyDiem gathers phone app usage, Tracey records fishing data.

[Figure: MyDiem's user interface under development]

The anonymised data bundle travels on the Streamr decentralized and cryptographically secure p2p Network and gets sold on the Streamr Marketplace. The profits, shy of the developer/admin fee, are distributed among the Data Union participants through a smart contract.

[Figure: Data Unions model from Streamr]

The GDPR and Data Unions give you back choice, control and empowerment, but they are not going to take the Data Cartel out of the picture all at once. Even if you install the Swash plugin, Google will continue to monetise your searches.

Nevertheless, when you decide to own and sell your data, you are remoulding a monopoly into a polyopoly. Which is Greek for a market situation where there are many sellers and many buyers. Competitive forces in this market typology are highly effective in suppressing control centres and providing better transparency and inclusion. According to a paper from the United Nations, "Facebook or Google would lose monopoly powers if the data they collect were available to all interested parties at the same time."

As long as we are human, we will be vulnerable to persuasion; no technology will give us immunity from practices designed to engineer consent. Advertising is not intrinsically evil, yet the Cambridge Analytica scandal showed that there is a veil under which the most treacherous of these practices tend to flourish.
The Data Union promises to lift that veil by giving you your share of the profits and by opening the market to a symposium of sellers and buyers, where nobody holds control over the other. If the market stops providing monopolistic benefits, the only way for the data exploiters to survive will be to stop the exploitation and comply with the new rules of inclusion.

The Data Hypernormalisation

In the Soviet Union of the 70s and 80s, everyone witnessed the crumbling of the system, but no one could imagine or dare to propose an alternative to the status quo. Everyone played along, maintaining the pretense of a functioning society. The wintriness of a mock-up system was accepted as the new reality. This effect was termed "hypernormalisation" by the anthropologist Alexei Yurchak.

[Image: From the BBC documentary "HyperNormalisation" by Adam Curtis]

In today's Data Economy we are walking a thin line between a global, secure, accessible, neutral network of information and a dystopia where, without free-floating market prices, a few firms capture all the surplus value created by data — your data — and amass unprecedented wealth. Today's Data Economy is not a free market. With a few specialised firms anchored to their data-product niches, the system is highly uneven, featuring asymmetric bargaining powers. It is a system designed to benefit the few at the expense of the many.

Ignoring the abuse that is being perpetrated on your digital property is like hurrying down a steep staircase with your hands in your pockets. You may survive, but you are not going to win a medal for smartest guy in the room.

If you are interested, I encourage you to check out the latest news from Streamr.
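To make the revenue-split mechanics described above concrete, here is a minimal sketch of how a Data Union payout could be computed. The fee level, member count and equal-split rule are assumptions made for this example; in the real framework, the distribution is handled on-chain by Streamr's Data Union smart contracts.

```python
# Illustrative sketch of a Data Union revenue split: a data product earns revenue on a
# marketplace, an admin/developer fee is deducted, and the remainder is shared among
# members. All numbers and the equal-split rule are assumptions for this example.

def data_union_payouts(gross_revenue, admin_fee_fraction, members):
    """Return each member's share after the admin/developer fee is taken."""
    if not 0 <= admin_fee_fraction < 1:
        raise ValueError("admin fee must be a fraction between 0 and 1")
    distributable = gross_revenue * (1 - admin_fee_fraction)
    share = distributable / len(members)
    return {member: share for member in members}

if __name__ == "__main__":
    members = [f"0xmember{i:03d}" for i in range(100)]  # hypothetical member addresses
    payouts = data_union_payouts(gross_revenue=1000.0, admin_fee_fraction=0.3, members=members)
    print(f"each of {len(members)} members earns {payouts[members[0]]:.2f} DATA this period")
```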

Streamr network

20. 07. 21

Dev update, June 2020

This is the Streamr project dev update for June 2020, welcome! Here are the highlights of the month:

- Data Unions framework now available in public beta!
- Finished a proof-of-concept for a next-generation DU architecture
- Network whitepaper being finalised, should be out in late July/early August
- Completed Phase 1 of token economics research with BlockScience
- End-to-end encryption with automatic key exchange now 80% ready

Data Unions in public beta

The Data Unions framework, which had been in private beta since October last year, is now publicly available to everyone. So what's new?

- There are new docs, detailing the steps to create Data Unions and integrate apps with the framework.
- The "Create a Product" wizard on the Marketplace now includes the option to create a Data Union instead of a regular product.
- For Data Union products, the product view now shows various stats about members, earnings, and Data Union parameters.
- DU products also expose an admin section, where DU creators can manage members, app keys, and such.
- The JS SDK ships with easy accessors to the DU-related methods, making integration a breeze for JS-based platforms. The Java SDK will get support before the official launch, and even if you're working on a platform with no official SDK just yet, integration with the raw API isn't hugely complicated, although it does require some effort (and we're happy to help).

The beta is feature-complete in terms of the fundamentals. Its purpose is to expose any remaining issues before the framework is officially launched after the summer. We'll be expanding the SDK support as we go, as well as creating other useful tooling for Data Union admins, such as scripts to kick out inactive members and to implement custom join procedures (a Data Union might want to include a captcha to prevent bots from joining, etc.).

Upcoming Data Unions architecture

We have already started working on the first major post-release upgrade to the Data Unions framework. In the previous update, I mentioned we were working on a proof-of-concept, and this task has now been completed.

We're calling this upgrade Data Unions 2.0, communicating a major version bump with an improved architecture. In contrast to the current architecture built around Monoplasma and its Operator/Validator model for scalability and security, Data Unions 2.0 will feature an Ethereum sidechain that holds the Data Union state fully on-chain, with the POA TokenBridge (with AMB) connecting the sidechain to mainnet.

All current Data Unions will be upgradable to the new infrastructure once it's ready later this year. While the proof-of-concept has been completed and we're now committed to this approach for the next upgrade, there is still plenty of work to be done. A blog post detailing the upgrade will be posted in due course.

Network whitepaper

The whitepaper, detailing the Corea milestone of the Streamr Network, is almost ready. The experiments are complete, and we're working on the text to accompany the results. In June, the work snowballed slightly, as we realised we needed to prove the randomness of the generated network topologies in order to relate to some earlier literature, but that hurdle has thankfully now been crossed. If no further obstacles are encountered, the paper should be ready by the end of July or early August.

Phase 1 with BlockScience completed

We've reached the end of Phase 1 in the token economics research project with BlockScience.
The Phase 1 deliverable was a document containing mathematical formulations of the objects and rules in the Streamr Network.

In Phase 2, we'll start the actual modeling process, in which the first, simple simulations of the Streamr Network token economics are built using the cadCAD framework. In future phases, the models will be further refined and iterated, and those models will inform our decisions about the future incentive model.

End-to-end encryption with key exchange

Streamr has had protocol-level support for end-to-end encryption for a long time. It has also been implemented in the SDKs as a pre-shared key variant. This is a simple implementation that relies on each party's ability to communicate secrets outside the system, over another secure channel. The downside of the pre-shared key approach is that the publishing and subscribing parties need to know and contact each other in advance, before they exchange encrypted data. (A minimal illustration of the pre-shared key idea appears at the end of this update.)

We've recently been working on a key exchange mechanism that happens directly on the Streamr Network to securely communicate the keys to the correct parties. This makes end-to-end encryption effortless and automatic for all parties involved. This is very important, because end-to-end encryption is obviously a requirement for decentralization; nodes in the network will generally be untrusted. And usability shouldn't be sacrificed for security — the automatic key exchange achieves both.

Looking forward

July and August will be the epicentre of the team's annual holidays, and we'll be producing only one dev update over this period, due in the second half of August. However, over the next couple of months you can also look forward to dedicated posts about the Network whitepaper and the Data Unions 2.0 architecture.

A summary of the main development efforts in June is below, as well as a list of upcoming deprecations that developers building on Streamr should be aware of. As always, feel free to chat with us about Streamr in the official Telegram group or the community-run dev forum.

Network

- Whitepaper making slow but steady progress, should be ready in late July/early August
- Encryption key exchange 80% ready in both JS and Java SDKs
- Discovered an issue where a tracker gives a node more peers than it should, working on a fix
- WebRTC issues still being investigated, opening an issue with the library developers
- Storage refactor in PR, working on data migration tool
- Token economics research Phase 1 completed
- Support for old ControlLayer protocol v0 and MessageLayer v28 and v29 dropped, as previously communicated in the breaking changes section

Data Unions

- Data Unions framework launched into public beta
- Alerts & system monitoring improvements to detect problems
- Data Unions 2.0 proof-of-concept successfully completed

Core app, Marketplace, Website

- Terms of use, contact details, and social media links added to Marketplace products
- Working on a website update containing updates to the top page, a dedicated Data Unions page, and a Papers page to collect the whitepaper-like materials the project has published
- Stream page now shows code snippets for easy integration

Deprecations and breaking changes

This section summarises deprecated features and upcoming breaking changes. Items marked 'Date TBD' are known to happen in the medium term, but a date has not been set yet.

- (Date TBD): Support for API keys will be dropped. As part of our progress towards decentralization, we will eventually end support for authentication based on centralized secrets. Applications integrating with the API should authenticate using the Ethereum key-based challenge-response protocol instead. Instructions for upgrading from API keys to Ethereum keys will be posted well in advance of dropping support for API keys.
- (Date TBD): Support for unsigned data will be dropped. Unsigned data on the Network is not compatible with the goal of decentralization, because malicious nodes can tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease as part of the progress towards that milestone. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above).

Originally published at https://blog.streamr.network on July 14, 2020.
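As promised above, here is a minimal sketch of the pre-shared key idea using a generic crypto library: both parties already hold the same symmetric key, exchanged over some other secure channel, so the publisher can encrypt a payload and the subscriber can decrypt it. This is an illustration of the concept only, not the Streamr SDK's actual implementation or message format.

```python
# Illustration of the pre-shared key approach: publisher and subscriber already share a
# symmetric key (communicated out of band), so payloads can be encrypted end to end.
# This is NOT the Streamr SDK's implementation or wire format.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Out-of-band step: both parties agree on this key in advance over a secure channel.
pre_shared_key = AESGCM.generate_key(bit_length=256)

def encrypt_payload(key, payload):
    """Publisher side: encrypt a JSON payload with AES-GCM."""
    nonce = os.urandom(12)  # must be unique per message
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(payload).encode(), None)
    return nonce, ciphertext

def decrypt_payload(key, nonce, ciphertext):
    """Subscriber side: decrypt and parse the payload."""
    return json.loads(AESGCM(key).decrypt(nonce, ciphertext, None))

nonce, ct = encrypt_payload(pre_shared_key, {"temperature": 21.3})
print(decrypt_payload(pre_shared_key, nonce, ct))  # -> {'temperature': 21.3}
```

The automatic key exchange under development removes the out-of-band step by delivering keys to the correct parties over the Network itself.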

Streamr network

20. 07. 15

You shouldn’t sell your kid...

You can't sell your kidneys! Responding to objections to data ownership

In May, I published a lengthy essay on why, for ordinary individuals, privacy was dead and how a framework of data ownership would provide not just more privacy, but also much more data sharing, economic equality and dignity for the billions of people who use the internet.

The essay sparked a fairly passionate response on social media from those who advocate for a privacy-centric world. I had anticipated more than a bit of blow-back, especially because, despite the first essay's length, many points about the ownership model remained unanswered. That is my failing, which I hope to rectify with this second essay. I've tried to whittle the objections down to seven points. No doubt there are more, but these seven seem to be the most crucial to answer.

1. Data monetization will hinder the open data economy.

The argument here is that if the goal is to ensure data is shared most widely, for everyone's benefit, then it needs to be free. As soon as you put price tags on data, you inject "enormous friction into free flow of information."

At first glance, this sounds like it should be true. Paying for stuff is a friction; not paying is frictionless. But this misses a bigger economic insight. Apply the same argument to bread. If we say bread must be free for all to utilise, and the state must ensure all bread producers make their bread free for all to utilise (which is what Open data campaigners are ultimately asking for with data), then far fewer people would have bread (that's a pretty big friction!). Why is this so? Simply because there would be no incentive to produce bread. People may argue that data is an effective side-product of other activity. But that is far from clear. In fact, as Streamr's sister company and WWF are already discovering, incentivising the production of data turns out to create very original and necessary products.

At its most fundamental level, money is actually a communication tool. Removing money from data means there is no common protocol for sorting good and bad products. Money allows us to say, "my toaster is worth 162 of those apples, 12 pairs of socks and 73 ballpoint pens" all at the same time. A well-priced market for data will therefore sort the good from the bad and end the under-the-table economy which currently exists for user-generated data. By putting a price on data, you should actually see more of it being exchanged and distributed.

But what about those data sets that should remain free because there is a social good involved? Doesn't introducing money devalue social giving? Well, why don't we leave it to the ordinary people who create that data to decide whether they want to share what they own freely or not? By insisting that data should not have a price, those who want Open data are effectively insisting that money should be replaced with laws to enforce its distribution. It is a busted model at best. And for anyone with libertarian instincts, a dangerous one at worst.

2. Trading data will kill privacy further.

The argument often made about devaluing privacy by trading it is about commodifying a right. It's about what goes on in people's minds.
To put it bluntly: if you turn data into property and give people a monetary incentive to sell, then really you're bribing them to forgo their privacy.

In the original essay I argue that people with ownership rights over their data will have far more legal and enforcement leverage to obtain whatever outcome they desire: a vast improvement over the current scenario, where people are forced to beg FAANG or their governments for just one outcome — privacy. Those points in and of themselves should answer this critique, because in the round, with data ownership, people will have more choice over what happens to their data. But there are several other retorts to deploy here that answer the bribery point more directly.

Firstly, people are likely to imbue data with more worth, not less, if they own it. This is a well-studied behavioural economics phenomenon termed the endowment effect. In aggregate, this phenomenon could have a far, far larger effect on people's mindsets than anything privacy campaigners could muster in terms of public education.

Secondly, monetisation allows people to judge more precisely what they are forgoing in terms of their privacy. Not every piece of information I generate is equally precious to the integrity of my identity in the public sphere. I care when others compile lists of who I emailed or texted today. I don't care so much when it comes to revealing what songs I listened to (though I would of course care if that data set could be cross-referenced so as to reveal the first).

Currently, privacy puritans ask people to get involved in deeply technical or political fights with both governments and companies in order to resist all intrusions. That's the only weapon of resistance they can offer, and for most people it is a near-impossible drain on their time and abilities. And it's this impossible ask which devalues their privacy more than anything: because it is too difficult to protect what is precious, people end up giving up on all of it, and their privacy becomes entirely worthless by default.

So why not put a value on it, and ask people to figure out those decisions for themselves? I'd bet that if an advertising agency or a hedge fund offered to pay $20 a month to listen in on people's conversations, the vast majority would give it some hard thinking before saying yes or no. Because people are so powerless to begin with, they barely think about it at all. Putting a price on privacy helps people determine what is valuable to them and what is not. Given where we are at the moment, that will very likely mean a lot more information remains private, or is priced so high (in aggregate) that said information no longer makes commercial sense to purchase.

3. Turning rights into commodities harms the poor the most.

But what about the poor? Those people for whom $20 from an advertising agency is a week's wage? Won't they be turned into data-producing machines, each click generating more money for them but vastly more for the companies utilising the data? Won't this set-up reinforce existing inequalities rather than mitigating them? What if people are tempted into selling all the rights to their genetic code? If you're not careful, the warning goes, this becomes analogous to setting up a market in body parts, where the poor are enticed to sell their kidneys.
This dystopian vision is vividly laid out by Valentina Pavel here.

To sincerely believe that these nightmare scenarios will come true, you have to take a few deft mental leaps and reduce your model of ownership to the most simplistic notion of property that exists. I own this lumber. I sell it to you. You now own it and I have no claim. End of story.

But of course, property transfers encompass a far broader spectrum of models. There's a reason why property makes up nine-tenths of the law. When transferring data as property, Data Unions, which act as mediators of people's data, will likely adopt leasing rights more akin to authorship rights than to simplistic property rights. The academic Maria Savona has begun to argue this out. Leasing is of course only slightly more complicated as an ownership structure, but it means that professional bodies (data buyers and Data Union administrators) can come to terms over how property is utilised and in what way. This happens in the real world all the time, every single day. To argue that it can't happen with data (it already does) really is wilful blindness.

And yes, hands up, we're going to need legislation to stop unscrupulous players and to establish healthy relations between a union's managers and its owners. Excitingly, this is something that is already being worked on by RadicalXChange and is also being discussed by the European Commission.

And maybe, too, rather like the housing market, the sale of such property will be regulated to the point where individuals will find it difficult to simply sell off their personal data without employing an agent (like a Data Union) to act on their behalf.

But there is a second element to this counter-argument to data ownership which deserves teasing out. Usually these arguments come from those who model society as an interaction between three parties: the state, atomised individuals and big tech. But this is a desperately hollowed-out view of what society actually is. And it too easily forgets what civil society actors like labour unions, mutual savings and loan banks, and credit unions have done for the position of the poor. By collectivising interests, those institutions improved, not further immiserated, the lot of society's most disadvantaged people. Why wouldn't they act in the same fashion for the poorest when it comes to the data economy? In our nearly realised world of Data Unions, the brokering of terms of sale does not take place between an individual and a tech giant. That world would indeed be a rapacious one for the individual to navigate. Instead, these sales take place through a mediator: Data Union professionals (like Swash) who represent the interests of individual members when coming to terms with data buyers around the globe. These are therefore transactions between parties on a far more equal footing.

The suggestion that you'd be selling your kidneys is not hyperbole. This is the argument from the EC's own specialist body, the European Data Protection Supervisor, on the matter:

"There might well be a market for personal data, just like there is, tragically, a market for live human organs, but that does not mean that we can or should give that market the blessing of legislation."

And as the 2017 report's next line goes on:

"One cannot monetise and subject a fundamental right to a simple commercial transaction, even if it is the individual concerned by the data who is a party to the transaction."

This last sentence really grates. It belies a real arrogance, borne of a desperately paternalistic attitude.
Why shouldn't people have a say in matters that directly affect them, even more so when those matters are born of their labour? And it grates all the more given that it is our paternalistic legislator who has been doing all the failing when it comes to protecting privacy. Because all this is being said in an economy in which thousands of companies are already owning and trading our data with each other.

4. But privacy tools are just getting warmed up!

In my essay, it's very clear that I did not give due heed to the new privacy tech that people will soon be able to use, such as zero-knowledge proofs or completely trustless decentralized systems, software that will bolster the privacy cause immeasurably by making privacy easier for individuals to control. And what about the extra money that has poured into the privacy tech space (largely during the crypto boom of 2017) that is yet to bear developmental fruit? (Don't forget that crypto is short for cryptography, one of the most central privacy-enhancing technologies.)

A quick rejoinder is this: these are just tools, and they can be deployed in a framework of data ownership as well as within a privacy setting. Privacy tools needn't only be employed within a privacy-centric world view. Rather like putting up blinds for my house — I can both own my data and encrypt it. They aren't mutually exclusive. It's the overall legal/ethical/economic framework that's most important to get right. The privacy framework still suffers from the critiques made in the original essay, and extra (and more technically complicated) tooling doesn't negate them.

5. You can't claim ownership over data — it's too complicated/interconnected.

Because data is so interlinked between people, how will it be possible for a single individual to own it? Glen Weyl says this: "My mother's (date of birth) is also my (mother's date of birth)." This is of course true. There are hundreds of examples like this. Photos that contain more than the image of yourself. A home address where more than one person lives. How can any individual claim data points like these when the underlying information they communicate has a value-generation lineage which could be claimed by so many others, too?

It's a powerful argument, but the flaw perhaps is this: it's almost entirely hypothetical. In the world of actual data sales, useful saleable data generated by individuals isn't made up of individual unconnected data points. The theoretic doesn't correspond to reality. Firstly, no one actually wants to buy one birthday. So argue all you want, but the underlying property is valueless (and I concede that plenty of people have in fact argued over the rights to own near-valueless items for the sake of principle).

And even a bunch of birthdays is actually just that. Without names attached, it's just a bunch of random dates. Literally anyone could generate that information. In fact, even birthdays and full names don't provide much in the way of saleable data. What sells, what has value to others, are multiple data points from individuals that are linked (usually in chronological fashion), because those linked data points provide useful information about the world.

If we take that as the premise, then linking those data points is the work done in creating the output.
If you start linking data points that pertain to you (even though some of them might interconnect to others) you’ll quickly create a data stream that is unique to you as an individual.If that is starting to sound overly complicated, swap the word data for story and you get a better intuitive sense of what is meant. As an author, I can’t own a given word (or data point) in my book. My rights to assert ownership derive from the fact I’ve worked to put a significant number of those words together to form something entirely unique (data stream). Sometimes that can be as short as a haiku. Other times it is War and Peace.Pointing out that data sets are made up of individual data points that can’t be owned because they are common to others is correct. No one can dispute that. But it is akin to pointing at hundreds of pages of Wolf Hall then asserting that Hilary Mantel has no right to intellectual property over those works because she can’t own any individual word because other people use those words.And of course many of these legal arguments about what can and can’t be owned, and in what ways, whether it’s literary, photographic or otherwise, have already been settled (are the ownership rights over data from a Facebook group really any more complicated than a multi-member rock band writing and recording a #1 hit?). Over the centuries, legal precedents have been set. So whilst this might seem complicated within the context of data, for those navigating books, films, or music, those precedents are relatively easily navigated today.There are many synergies here between the established world of creative IP and the up-and-coming world of data ownership. There is plenty of case law already available to inform the numerous disputes that will inevitably arise once data is further instantiated as a new form of property. And that’s okay. Because those disputes, once resolved, will, like with other forms of intangible property ownership, eventually allow for easier navigation and ultimately much better outcomes.6. What about indirect data?So it is that not all data that is generated by the individual is solely about just that individual (interpersonal data), not all data about an individual is generated by that individual (indirect data). How do data ownership and monetisation solve these issues? For now, I’m not sure they do.When it comes to indirect data, I for one believe in the utility of people to collect information on society. Otherwise we might as well close all sociology departments now. The problems come when CCTV cameras (or street lights) can track your every movement or employers own employee work product, or you find yourself in ten years’ time, living in what we benignly call a smart city. A data ownership model doesn’t have a direct answer to this, which still means there’s plenty of room for privacy laws to regulate this sphere of data collection.7. Individuals won’t get enough money to make this worthwhile.This is an argument formulated by people who’ve likely never entered the business of selling personal data (granted: few people have). Sure, data from one app might be worth very little when divided amongst all users, but combine my credit card data with my Netflix, Spotify, Google, Amazon Alexa, Twitter and LinkedIn data, and that’s likely worth hundreds of dollars every year. If these critics had sold their data, they’d know how much user-generated data is worth in those under-the-table markets that already operate secretly every day. 
And of course not every Data Union will need every single person to join for its data to be valuable. Both now, and in the future, Data Unions just need a sample size of the whole to deliver reliable information to buyers. The point about the future is important because, as Lanier says, the pie will grow.

“The point of a market is not just to distribute a finite pie, but to grow the pie. Those who dismiss the value of what people do online have forgotten this most basic benefit of open markets.”

Originally published at https://blog.streamr.network on July 2, 2020.

You shouldn’t sell your kidneys! Responding to objections to data ownership was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 07. 02

News: the Data Union public...

As of today, you can create and deploy a Data Union using the tooling available in Streamr Core. The Data Union framework, now released in public beta, is an implementation of data crowdselling. By integrating into the framework, app developers can empower their users to monetise the real-time data they create. Data from participating users is sent to a stream on the Streamr Network, and access to the pooled data is sold as a product on the Streamr Marketplace. Any revenue from the data product is automatically shared among the Data Union members and distributed as DATA tokens.

Streamr launched the Data Union framework into private beta in October last year, with the Swash app at Mozfest in London. Swash is the world’s first Data Union, a browser extension that allows individual users to monetise their browsing habits. With this public beta launch, we hope to spark the development of even more Data Unions.

What’s new in the public beta release?

If you’ve used Streamr Core before, you might already be familiar with creating products on the Marketplace. With the introduction of the Data Union framework, the ‘Create a Product’ flow now presents two options: create a regular Data Product, or create a Data Union.

Data Unions are quite similar to a regular data product — they have a name, description, a set of streams that belong to the product, and so on. However, there is one important difference: the beneficiary address that receives the tokens from purchases is not the product owner’s wallet — instead it is a smart contract that acts as an entry point to the revenue sharing.

The Marketplace user interface guides the user through the process of creating a Data Union and deploying the related smart contract. The Data Union can function even while the product is in a ‘draft’ state, meaning that app developers can test and grow their Data Unions in private, and only publish the products onto the Marketplace once a reasonable member count has been achieved. For the app developer/Product Owner, there are also new controls for:

- setting the Admin Fee percentage (a cut retained by the app developer/Product Owner),
- creating App Secrets to control who can automatically join your Data Union, and
- managing the members of your Data Union.

For all published Data Unions, basic stats about the Data Union are displayed to potential buyers on the product’s page.

An example Data Union Product overview

Deploying a Data Union

The process of creating Data Unions and integrating apps with them is now described in the relevant section of the Docs library. Here’s the process in a nutshell:

1. Make sure you have MetaMask installed, and choose the Ethereum account you want to use to admin the Data Union.
2. Authenticate to Streamr with that account (this creates a new Streamr user), or connect that account to your existing profile.
3. Create one or more streams you’ll collect the data into.
4. Go to the Marketplace, start the Create a Product flow, and choose Data Union.
5. Fill in the information for the product and select the stream(s) you created.
6. Click the Continue button to save the product and deploy the Data Union smart contract!

Your empty Data Union has been created! Next, you’ll want to integrate the join process and data production into your data source app. The easiest way to accomplish both is to leverage the JavaScript SDK, which already includes support for all the Data Union functions.
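To make the revenue flow described above concrete, here is a minimal sketch of the split implied by the Admin Fee. It assumes, for simplicity, that the remainder after the Admin Fee is divided equally among members; the actual distribution is handled by the Data Union smart contract, and the numbers here are purely illustrative.

```javascript
// Illustrative only: how a Data Union's revenue might be split, assuming an
// equal share per member after the Admin Fee is deducted.
function splitRevenue(revenueData, adminFeeFraction, memberCount) {
  const adminCut = revenueData * adminFeeFraction;          // retained by the Product Owner
  const perMember = (revenueData - adminCut) / memberCount; // distributed to members as DATA
  return { adminCut, perMember };
}

// e.g. 1000 DATA in sales, a 10% Admin Fee and 300 members
console.log(splitRevenue(1000, 0.1, 300)); // { adminCut: 100, perMember: 3 }
```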
In your app, you’ll want to:

1. Generate and store a private key for the user locally.
2. Make an API call to send a join request (include an App Secret to have it accepted automatically).
3. Start publishing data into the stream(s) in the Data Union!

Again, detailed integration instructions are available in the Docs.

Data Unions present an opportunity for app developers to reward users for sharing their data, giving Data Union products a competitive advantage

So what’s next?

The public beta is feature-complete in the sense that all the basic building blocks are now in place. Over the next couple of months, we’ll be addressing any loose ends, such as bringing the DU functionality to the Java SDK and adding tooling for Data Union admins to manage their member base.

We’ll also be monitoring the system closely, in the hope that the public beta phase will help reveal any remaining issues. Please do expect to encounter some hiccups along the way — none of this has been done before! If all goes well during the public beta, we’re looking to officially launch Data Unions in Q3 this year. The launch will be accompanied by a marketing campaign and some changes to the website to highlight the new functionality.

If you have an idea for a Data Union, take a look at the Docs to get started. The Streamr Community Fund is also here to offer financial support to the development of your project — you can apply here. We’re also happy to answer all your technical questions in the community-run developer forum and on Telegram.

Originally published at blog.streamr.network on June 18, 2020.

News: the Data Union public beta is now live was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
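As a rough illustration of the three integration steps listed earlier in this post, here is a sketch using the JavaScript SDK and ethers.js. The StreamrClient constructor and publish call follow the documented client API of the time, but the join request is shown via a placeholder helper (sendJoinRequest), since the exact join endpoint or SDK method should be taken from the Data Union docs rather than from this sketch.

```javascript
const StreamrClient = require("streamr-client");
const { Wallet } = require("ethers");

// Placeholder only: the real join call goes through the Data Union API or SDK.
// Consult the Docs for the actual method; this stub just records the intent.
async function sendJoinRequest(dataUnionAddress, memberAddress, appSecret) {
  console.log(`Would request to join ${dataUnionAddress} as ${memberAddress}`);
}

async function integrateUser({ dataUnionAddress, streamId, appSecret }) {
  // 1. Generate and store a private key for the user locally
  const wallet = Wallet.createRandom();
  // ...persist wallet.privateKey in the app's local storage so the same
  //    member identity is reused across sessions...

  // 2. Send a join request (the App Secret lets it be accepted automatically)
  await sendJoinRequest(dataUnionAddress, wallet.address, appSecret);

  // 3. Start publishing the user's data into the Data Union's stream(s)
  const client = new StreamrClient({ auth: { privateKey: wallet.privateKey } });
  await client.publish(streamId, { visitedDomain: "example.com", ts: Date.now() });
}
```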

Streamr network

20. 06. 18

Dev Update May 2020

Welcome to the Streamr dev update for May 2020! Looking back at last month’s update, we’re happy to realise that many major development strands that were still seeking solutions only a month ago have now found them and fallen nicely into place. Here are a few hand-picked highlights from May:

- Solved all remaining problems blocking the upcoming Network whitepaper
- Started testing the WebRTC flavour of the Network at scale
- Got the Data Unions framework relatively stable in advance of entering public beta
- Started planning a roadmap towards the next major Data Unions upgrade
- Completed Phase 0 of token economics research with BlockScience

The Network whitepaper

For over nine months now, a few people in the Network team have been hard at work documenting and benchmarking the Network at scale. The deliverable of that effort is an academic paper, intended for deep tech audiences in both the enterprise and crypto spaces, as a source of detailed information and metrics about the Network.

This blog post from September outlined the approaches and toolkit we were using to conduct the experiments, but the road to the goal turned out to be quite complicated. We’ve sort of learned to expect the unexpected, because pretty much everything we do is in uncharted territory, but trouble can still come in surprising shapes and sizes.

We worked steadily on setting up a sophisticated distributed network benchmarking environment based on the CORE network emulator, only to ditch it several months later because it was introducing inaccuracies and artifacts into our experiments at larger network sizes of 1,000 nodes or more. We then activated Plan B, which meant running the experiments in a real-world environment instead of the emulator.

We chose 16 AWS data centres across the globe and ran between 1 and 128 nodes in each of them, creating Streamr Networks of 16–2048 nodes in size. The new approach was foolproof in the sense that the connections between nodes were real, actual internet connections, but running a large-scale distributed experiment across thousands of machines brought its own problems. I’ll give some examples here. First of all, it needed pretty sophisticated orchestration to be able to bring the whole thing up and tear it down in between experiments. Secondly, accurately measuring latencies required the clocks of each machine to be synchronised to sub-millisecond precision. Thirdly, the resulting logs needed to be collected from each machine and then assembled for analysis. None of these things were necessary in the earlier emulator approach, but the reward for the extra trouble was accurate, artifact-free results from real-world conditions, adding a lot of relevance and impact to the results.

During May, we finally got each and every problem solved, and managed to eliminate all unexpected artifacts in the measured results. Right now we are finalising the text around the experiments and their results, and we are expecting the paper to become available on the project website in July.

Network progress towards Brubeck

Working towards the next milestone, Brubeck, means making many important improvements. One of them is enabling nodes behind NATs to connect to each other, which will allow us to make each client application basically a node. This, in turn, helps achieve almost infinite scalability in the Network, because clients will then help propagate messages to other clients. The key to unlocking this is migrating from websocket connections to WebRTC connections between nodes.

This work is now in advanced stages, although we are still observing some issues when there are large numbers of connections per machine. Having developed the scalability testing framework for the whitepaper comes in handy here: the correct functioning of the WebRTC flavour of the Network can be validated by repeating the same experiments and checking that the results are in line with the ones we got with the websocket edition.

Another step towards the next milestone is making the tracker setup more elaborate. Trackers are utility nodes that help other nodes discover each other and form efficient and fair message broadcasting topologies. When the Corea milestone version launched, it supported only one tracker, statically configured in the nodes’ config files, making peer discovery in the Network a centralized single point of failure; if the tracker failed, message propagation in the Network would still function, but new nodes would have trouble joining, deteriorating the Network over time. Thanks to recent improvements, the nodes can now map the universe of streams to a set of trackers, which can be run by independent parties, allowing for decentralization. Trackers can now be discovered from a shared and secure source of truth, a smart contract on Ethereum mainnet, which in the future could be a token-curated registry (TCR) or a DAO-governed registry. The setup is somewhat analogous to the root DNS servers of the internet, governed by ICANN — only much more transparent and decentralized.

Ongoing work also includes improving the storage facilities of the Network. Storage is implemented by nodes with storage capabilities. They basically store messages in assigned streams into a local Cassandra cluster and use the stored data to serve requests for old messages (resends). The current way we store data in Cassandra has been problematic when it comes to high-volume streams, leading to uneven distribution of data across the Cassandra cluster, which in turn leads to query timeouts and failing resends. In the improved storage schema, data will be more evenly distributed, and hotspot streams like these shouldn’t cause problems going forward. As a result, the Network will offer reliable and robust resends and queries for historical data.

There’s also ongoing work to upgrade the encryption capabilities of the Network — or more specifically the SDKs. The protocol and Network have actually supported end-to-end encryption since the Corea release, but the official SDKs (JS and Java so far) only implement end-to-end encryption with a pre-shared key. The manual step of pre-sharing the encryption key limits the usefulness of the feature. The holy grail here is to add a key exchange mechanism, which enables publishing and subscribing parties to automatically exchange the decryption keys for a stream. This feature is now in advanced stages of implementation, and effortless encryption should become generally available during the summer months.

Data Unions soon in public beta

The Data Unions framework is approaching a stable state. In the April update, we discussed some issues where the off-chain state of the DUs became corrupted, leading to lower than expected balances. All known issues were solved during May, and the system has been operating without apparent problems since then.

The Data Unions framework has been in private beta since late last year, with a couple of indie teams (Swash having made the most progress so far) building on top of it. During the private beta, we’ve been working on stability, documentation, and frontend support for the framework. We’re now getting ready to push the DU framework into public beta, which means that everyone can soon start playing around with it. The goal of the public beta phase over the summer months is to get more developers hands-on with the framework, and to iron out remaining problems that might occur at larger-scale use (and abuse).

We’ve also started planning the first major post-release upgrade to the Data Unions architecture, which will improve the robustness and usability of the framework. We are currently working on a proof of concept, and we’ll be talking more about the upgrade over the course of the summer.

Phase 0 of token economics research completed

As was mentioned in one of the earlier updates, we started a collaboration with BlockScience to research and economically model the Streamr Network’s future token incentives. It’s a long road and we’ve only just started, but it’s worth sharing that in May we reached the end of Phase 0. This month-long phase was all about establishing a baseline: transferring information across teams, establishing a glossary, documenting the current Network state and future goal state, and writing down what we currently know as well as key open questions.

The work now continues with Phase 1, the goal of which is to define mathematical representations of the actors, actions, and rules in the Streamr Network. In future phases, the Network’s value flows will be simulated, based on this mathematical modeling, to test alternative models and their parameters and inform decisions that lead to incentive models sustainable at scale.

Looking forward

By the next monthly update, we should have Data Unions in public beta, and hopefully also the Network whitepaper released. Summer holidays will slow down the development efforts over July-August, but based on previous summers, this shouldn’t prevent us from making good progress.

To conclude this post, I’ll include a bullet-point summary of the main development efforts in May, as well as a list of upcoming deprecations that developers building on Streamr should be aware of. As always, you’re welcome to chat about building with Streamr on the community-run dev forum or follow us on one of the Streamr social media channels.

Network

- Experiments for the Network whitepaper have been completed. Finalising text content now.
- Java client connection handling issues solved. Everything running smoothly again, including canvases.
- The Network now supports any number of trackers.
- Brokers can now read a list of trackers from an Ethereum smart contract on startup.
- WebRTC version of the Network is ready for testing at scale.
- Token economics research Phase 0 completed.
- Working on a new Cassandra schema and related data migration for storage nodes.
- Working on key exchange in JS and Java clients to enable easy end-to-end encryption of data.

Data Unions

- Data Union developer docs are complete.
- Problems causing state corruption were fixed.
- Started planning a major architectural upgrade to Data Unions.

Core app, Marketplace, Website

- Streamr resource permissions overhaul is done.
- Buyer whitelisting feature for the Marketplace is done.
- Working on adding terms of use, contact details, and social media links to Marketplace products.
- Working on a website update containing updates to the top page, a dedicated Data Unions page, and a Papers page to collect the whitepaper-like materials the project has published.

Deprecations and breaking changes

This section summarises deprecated features and upcoming breaking changes. Items with dates TBD are known already but will occur in the slightly longer term.

- (Date TBD): Authenticating with API keys will be deprecated. As part of our progress towards decentralisation, we will eventually end support for authenticating based on centralised secrets. Integrations to the API should authenticate with the Ethereum key-based challenge-response protocol instead, which is supported by the JS and Java libraries. At a later date (TBD), support for API keys will be dropped. Instructions for upgrading from API keys to Ethereum keys will be posted well in advance.
- (Date TBD): Publishing unsigned data will be deprecated. Unsigned data on the Network is not compatible with the goal of decentralization, because untrusted nodes could easily tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease before that point. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above), which enables data signing by default.

Originally published at blog.streamr.network on June 16, 2020.

Dev Update May 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
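To illustrate the bullet above about brokers reading a list of trackers from an Ethereum smart contract on startup, here is a minimal sketch using ethers.js. The registry address and the getNodes() function in the ABI are assumptions for illustration only, not the actual Streamr tracker registry interface; the real contract address and ABI should come from the Network docs.

```javascript
const { ethers } = require("ethers");

// Assumed interface for illustration; the real tracker registry may differ.
const TRACKER_REGISTRY_ABI = ["function getNodes() view returns (string[])"];

async function fetchTrackers(registryAddress, rpcUrl) {
  const provider = new ethers.providers.JsonRpcProvider(rpcUrl);
  const registry = new ethers.Contract(registryAddress, TRACKER_REGISTRY_ABI, provider);
  // Under the assumed interface, returns e.g. ["wss://tracker-1.example:30300", ...]
  return registry.getNodes();
}

// Usage with placeholder values
fetchTrackers(
  "0x0000000000000000000000000000000000000000",   // placeholder registry address
  "https://mainnet.infura.io/v3/<your-project-id>" // placeholder RPC endpoint
)
  .then((trackers) => console.log("Known trackers:", trackers))
  .catch(console.error);
```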

Streamr network

20. 06. 16

Streamr project update, Jun...

The global moment we find ourselves in today is unlike any other. It has thrown up uncertainty and confusion, yet at the same time it has shed light on something that has always been central to the Streamr vision — the enduring strength and importance of community; the ways in which our connections are what strengthen us in good times and bad.The Streamr community is not only invaluable to the growth of the ecosystem and the ultimate success of the Streamr project, it energises the team with ideas, challenges and discourse on a daily basis. So this felt like a good time to check in with you all, to review everything we have accomplished since launch, with your support. I’ll also be running an AMA on the 11th of June at 15:00 CEST, but this article can give an overview ahead of that, and perhaps prompt some questions for discussion. Let’s begin by reminding ourselves of the vision.The Streamr VisionStreamr was founded with the goal of building a real-time data infrastructure for future data economies. Ideally, all data streams in the world could be accessed via your nearest node, with participants incentivised to provide both content and delivery services on the system.The cornerstones of Streamr’s chosen approach are decentralization, peer-to-peer and blockchain. This is because, in our view, the only acceptable implementation of a future data infrastructure is one that is global, scalable, secure, robust, neutral, accessible and permissionless.Members of the Streamr team before the Network launch pier-to-pier boat party at DevCon5Decentralization ticks all the boxes (though it may not be the only solution — that remains to be seen). While a system built on fiat currencies and centralized technology could, if done skillfully, achieve sufficient user-facing functionality, it would always be heavily influenced by the commercial business goals of the commercial party operating it. The backbone of the global data economy should not serve someone’s business goals — it should serve everyone’s business goals.If the operation and governance of a system is distributed across many independent parties in different jurisdictions and geographies, with a diverse range of commercial interests, no individual party or set of parties can compromise it. And that’s when it becomes truly unstoppable.The Streamr vision is the foundation of what we do and why we do it. It is largely unchanged since launching the project in 2017, and continues to be the steady beacon that defines who we are.So what are we doing in 2020?In working towards the Streamr vision, the goal we have steadfastly pursued is delivering the roadmap laid out in the 2017 whitepaper. And we are well on our way to achieving that goal.Milestone 1 is complete, with the successful launch of the Streamr Marketplace in 2018, and the launches of Core and the Network last year. Since late last 2019, we’ve been working on Milestone 2, the main aim of which is to progress the Network towards token economics and decentralization. Much of the work in this Milestone focuses on removing technical obstacles for scalability and decentralization, and commencing research on token economics.In January of this year, Streamr ran an internal developer ‘Networkshop’ which addressed some of these obstacles head-on. Here, the Streamr dev team debated multiple development areas, generated new ideas and, above all, came away with the next steps to creating a network that is strong, secure and scalable. 
As we go forward, this process will involve but not be limited to: moving to a ‘clients as nodes’ model, ensuring network messages are signed and encrypted, and enhancing the systems by which we prevent network attacks.Members of the Streamr team at the Network GTM workshop in HelsinkiThe ‘Networkshop’ discussions around token economics were foundational — we defined the questions that we at Streamr need to answer in order to guide our token approach. Token economics are crucial because they are the mechanism by which the network captures the value created by user adoption. The mechanism incentivises people to participate in running the network, which enables decentralization, which enables the vision. We have recently begun research into token economics with BlockScience, and the project’s token economics will be designed during Milestone 2 and implemented in Milestone 3.Another big deliverable related to the Network is the scalability research for the current milestone. The goal of that research is twofold: to show how the bandwidth requirements for publishers stay constant regardless of the number of nodes, and to prove that the network has good and predictable latency, which grows logarithmically with the number of subscribing nodes. Both of these properties are very desirable for Streamr from a scalability point of view. In recent experiments, we observed the selected metrics in networks of up to 2048 nodes, running in real-world conditions, distributed to 16 different AWS locations globally.There were some major setbacks along the way. For example, we had to abandon the initially planned emulator approach because the emulator was adding severe artifacts to the measurements for large network sizes. Having to resort to actual real-world experiments made the process much slower and more expensive (spinning up thousands of virtual machines on AWS is not exactly cheap), but on the bright side, the results carry much more impact because they represent real-world performance. I have to say, the results turned out great and very competitive, even against the best centralized message brokers! The research will be published as a Network whitepaper very soon — the supporting text about the results is being finalised right now. The paper will be a crucial document for anyone thinking about utilising the Network for any business-critical or large-scale purpose.Data UnionsAnother big piece of work for 2020 is bringing the Data Union framework to market. The Data Union framework is our implementation of data crowdselling, a redistribution of data ownership which means that individuals can regain ownership over their personal data, rather than just the tech giants (who hoard and sell user data under the protection of T&Cs).Under the Data Union framework, users can pool their own data with that of other users, allowing them to increase their data’s value and then sell it in a Data Union, via the Streamr Marketplace.The functional flow of a Data UnionThe Data Union framework has been in private beta since late last year. One of the app teams with early access is Swash, which has seen steady organic adoption since it was first demoed at Mozfest last year. Users can also hide any data they prefer to keep private, thus returning choice and autonomy to the individual. This is a major step forward in our vision for a new data economy.And personal data ownership is something that people want, at least according to a research project that we ran in January. 
As our research partners at Wilsome stated: “Once we explained and demonstrated the concept of crowdselling and Data Unions, most people liked it, and some loved it.”Right now our focus is supporting the creation of more Data Union products like Swash, finalising developer documentation to support that, and fixing all bugs uncovered during the early-access phases. The full launch is taking place in autumn, with a marketing push to promote this disruptive new framework in the personal data monetisation space.We also started planning what the next big upgrade to the framework might look like; a ‘version 2.0’. In this new approach, the operator/validator model, Merkle proofs, and freeze period required by Monoplasma might be replaced with a side-chain plus inter-chain bridge to advocate a fully on-chain approach for better robustness and security, as well as fast withdrawals.Enterprise adoptionPartnerships have played a significant role in Streamr’s growth. Based on what we have learned over the last year of business development, we have made some changes to our enterprise partnerships tactics.2017–2018 saw a surge of excitement around paper partnerships in the space, but they were predominantly PR-led and rarely led to real adoption. Therefore in 2019, we set up TX as a vehicle to secure solid partnerships by systematically searching out actual, value-adding use cases, and by having the capability to offer solutions and services on top of the technology.At the height of the crypto hype, partnerships efforts across the space were focused on publicly telling stories about future collaboration. The new, down-to-earth approach is quite the opposite: once the enterprises really start building new capabilities by piloting new technology, they tend to keep quiet about it, and put NDAs in place to ensure their partners keep quiet about it too. Since the goal is no longer to produce news about partnerships every week, to an outside observer things may seem a bit quiet. This is the trade-off between talking about things and actually doing things, and in our view the latter is the only approach that can lead to actual substance, value creation, and serious adoption of the technology.TX enterprise adoption modelBringing in commercial drivers solved the paper partnerships problem: only serious enterprises are willing to pay for the work needed, and getting paid for the work actually enables TX to participate in those projects (as opposed to the unsustainable model of the project team having to spend time on partner projects for free). This approach has been effective in terms of bringing interactions to the table that are serious, concrete and commercially grounded. TX has the liberty to pursue prospective partnerships as they see fit, both pioneering a model for anyone to create a solutions business on top of Streamr, as well as allowing Streamr project resources to be spent more effectively on delivering the roadmap and advancing the vision.In tandem with this fresh partnership approach, this year we established a Growth team within Streamr. The team’s core objective will be to increase adoption across the board, with a particular focus on Data Unions in 2020. 
This objective will be accomplished via a multi-prong strategy of user feedback, research, nurturing the developer ecosystem, special project commissions and of course our ability to involve TX in enterprise partnerships.FinancesAt the time of writing, two and a half years into the project, around half of the funds raised in the token launch have been spent. The beginning of the project saw a spending peak as a result of initial set-up costs: legal and other crowdfunding-related expenses, team-building and setting up offices. Before the crypto market crash, spending was more liberal across the industry (anyone remember the boat party around Consensus 2018, where the organisers gave away two Aston Martins to random attendees?). While none of our project funds were lost in the crash, we have since introduced a more restrained approach to spending.Milestone 1 was significant, covering much more than one-third of the development work, and a little bit more than one-third of the project budget. We’re on track to complete the project within its planned schedule (five years) and budget, and we should see a reduction in our expenses towards the end, when development work starts to approach completion. As an example of how our work up until now will manifest in more sustainable spending, TX will make enterprise partnership efforts self-sustainable, which saves project funds and helps extend the lifespan of the tech infinitely. Mechanisms for funding the long-term maintenance of the technology far beyond the crowdfunded phase can also be included in the Network token economics and/or on the Marketplace — ideas that we’re exploring as part of the token economics research track.Community Fund / Growing the Streamr ecosystemThis year we also added another important layer to the Streamr Community Fund. We launched the fund in 2018 with 2,000,000 in DATA to empower community initiatives using our platform, and since then have funded several developers and projects with over 1,000,000 in DATA from the fund. But we realised that the one thing that DATA can’t buy is the kind of experience we have in our team at Streamr.Developers supported by the Community Fund have received advice from our skilled team about the tech and potential marketing strategies, and they can now receive guidance from members of the newly-formed Streamr Data Union Advisory board — industry leaders, veterans and academics who advocate for personal data ownership.UX of MyDiem, backed by the Community FundResponse to COVID-19The effects of the coronavirus pandemic are still unknown. Enterprises may withhold investment when it comes to exploring and piloting new technology in 2020. It’s possible that an economic downturn may impact the willingness to pilot cutting-edge technologies in the enterprise sector. Diminished demand also could impact TX as our partners face their own unique challenges, which may have a knock-on effect on maintaining self-sustainability. Thus far, Streamr remains robust in the face of this crisis, and if the situation doesn’t extend too far into 2021, we can remain on track with the progress we’ve made.ConclusionSo here we are — halfway through #buidling, with some major milestones behind us and well on our way towards the milestones ahead. These are uncertain times, there’s no doubt about that, but decentralization, self-sovereignty, and empowering people with control over their data and finances haven’t lost their importance. Quite the opposite, actually. 
With governments printing money for rescue packages, as well as leveraging personal data under martial law, these are even hotter topics than before. The innovations we come up with today will define the societies we live in tomorrow.

We’ll continue to hold on to our values and the bets we’ve made, and keep working through Milestones 2 and 3 towards a more decentralized, more efficient, more empowered future.

If you have any questions or comments about this update, be sure to join the AMA on the 11th of June at 15:00 CEST. Save the meeting link and post your questions in advance in this thread.

Streamr project update, June 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 06. 02

News: US politician James F...

News: US politician James Felton Keith joins Streamr Data Union Advisory Board, among several others

The US politician and entrepreneur James Felton Keith (JFK), an outspoken advocate for fair remuneration in exchange for personal data, has joined Streamr’s newly inaugurated Data Union Advisory Board.

The Advisory Board will guide Streamr in its endeavour to empower internet users through the use of Data Unions. Data Unions allow internet users to crowdsell their information for the first time in the internet’s history, whether it’s their musical preferences via a Spotify Data Union, or their browsing history via Swash, the first Data Union in the Streamr ecosystem. Instead of tech giants, individuals can reclaim control and ownership over their own data.

As privately-commissioned research by Streamr has shown, internet users are indeed eager to sell their data. However, until recently, there hasn’t been a marketplace for private data vendors.

“I believe that personal data is an individual’s property. And, as such, individuals deserve to receive a fair share of the value they’re co-creating. So far we’ve been lacking the infrastructure to do so, Data Unions are the way to go so everyone can receive a data dividend.” — JFK

Along with JFK, industry veterans and academics alike have joined Streamr’s effort to unlock the hidden value of personal data through Data Unions, and create a fair and just data economy. Other prominent members of the Streamr Data Union Advisory Board include the Italian economist Maria Savona, who is a professor of Innovation and Evolutionary Economics at the University of Sussex in the UK. Maria is a former member of the High Level Expert Group on the Impact of Digital Transformation on EU Labour Markets for the European Commission.

“One of the main challenges of the data economy is unpacking the black box of large tech’s business models, in order to understand the massive private value concentrations stemming from personal data. We need to build on the existing European legal frameworks we’ve been provided with, like the GDPR, to go beyond protecting privacy, allowing individuals to have broader agency on personal data, and be given choices on whether and how to share their data freely through intermediaries such as Data Unions.”

RadicalxChange’s president, Matt Prewitt, has also signed up to the board. At RadicalxChange, he advocates for a reform of the data economy and is a known voice in the Web 3 space. And this, in essence, mirrors what the Data Unions framework is doing — bringing the best of the Web 3 space into the Web 2 stack; decentralizing power through improved user control and the ability to monetise data evenly.

“Currently, we’re seeing a mismatch between value creators online and those who extract value online. I think that, through the integration of Web 3 technologies, we can rebalance the power dynamics of today’s internet. Through Data Unions, value creators get the opportunity to reclaim their ownership and to get remunerated fairly.”

Other members of Streamr’s new Data Union Advisory Board include Arnold Mhlamvu, Brian Zisk and Peter Gerard — all three music and film industry veterans who will help Streamr make Spotify or Netflix Data Unions a new normal. Mhlamvu launched Beatroot Africa, the fastest-growing digital content distribution company in Africa. Zisk produces conferences, including the Future of Money Summit, the SF MusicTech Summit, and other events including hackathons. He is also a seed investor and advisor to Chia Network. Gerard is an award-winning filmmaker and entrepreneur, and a leading expert in marketing and distribution for films and series.

With Alex Craven, the board has acquired a seasoned Data Union advocate. Since 2014, Alex has been exploring the issues surrounding personal data and trust, working on a Data Mutual Society concept. He is now the founder of the gov-tech startup The Data City, and previously joined Streamr at Mozfest to talk about Data Unions live on stage as part of the ‘Should we sell our data?’ panel.

And last but not least, Davide Zaccagnini has also joined the Streamr Data Union Advisory Board. Davide is a former surgeon and informatics researcher at MIT. While holding leadership positions in US startups and corporations, he served on the Advisory Board of the W3C. He will help Streamr navigate the complex world of standards and regulations when it comes to introducing the Data Union framework as a new global tool.

News: US politician James Felton Keith joins Streamr Data Union Advisory Board, among several… was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 05. 20

Dev Update April 2020

Welcome to the Streamr dev team update for April 2020. A lot has happened in the last month, so let’s kick off. Streamr has:

- Released a new version of the Marketplace
- Integrated Uniswap exchange functionality
- Tested multi-tracker support for the Network
- Continued WebRTC implementation for the Network

A quick editor’s note: we are adding a new section to our monthly dev update titled Deprecations and breaking changes. As you might have guessed, it is there to keep all developers building on top of the Streamr ecosystem informed about upcoming deprecations (features or components that are no longer supported) and breaking changes (alterations to functions, APIs, SDKs and more that require a code update on the developer side).

The newly deployed Marketplace contains a suite of analytics that users can explore on published Data Union products — the number of users belonging to a particular Data Union, aggregated earnings, estimated earning potential per user and more. Here you can see the example for Swash, published on the Marketplace. Note that Swash is still in its beta phase and the Data Union Product has been migrated to a newer version of the Data Union smart contract, so the current metrics don’t show the full picture.

Additionally, we also deployed a long-awaited Uniswap integration on the Marketplace. Thanks to this decentralized exchange (DEX), data buyers can now use either ETH or DAI to pay for a subscription, in addition to DATA coins. This is an important milestone because it simplifies the purchase process, which had caused some friction for new users.

Recently, the Network developer team finished testing a multi-tracker implementation. For any readers who are not yet familiar with the role a tracker plays in the Network, our core engineer Eric Andrews wrote the following in his recent blog post on the Network workshop:

An important part of the Network is how nodes get to know about each other so they can form connections. This is often referred to as ‘peer discovery’. In a centralized system, you’ll often have a predetermined list of addresses to connect to, but in a distributed system, where nodes come and go, you need a more dynamic approach. There are two main approaches to solving this problem: trackerless and tracker-based.

In the tracker-based approach, we have special peers called trackers whose job it is to facilitate the discovery of nodes. They keep track of all the nodes that they have been in contact with, and every time a node needs to join a stream’s topology, they will ask the tracker for peers to connect to.

A representation of the physical links of the underlay network

Now that we have finished testing the tracker model, the next step is to create an on-chain tracker registry and let Network nodes read the tracker list directly from there. This can be accomplished via a smart contract on the Ethereum network, so that the whole process of peer discovery can be handled in a decentralized way. In future, richer features could be deployed for the tracker registry, such as reputation management and staking to lower the possibility of misbehavior or network attacks.

The team made further progress on the Network side with the gradual implementation of WebRTC for the nodes. We recently ran an experiment, running over 70 WebRTC nodes in a local Linux environment, and the results were promising. That gave us additional assurance to proceed with the full implementation.

Regarding the Data Union development progress, we noticed there have been some performance issues and potential bugs in the balance calculation, caused by Data Union server restarts. We sincerely apologize for the inconvenience caused, and we are working to improve the Data Union architecture to guarantee higher stability before the official public launch later this year.

Deprecations and breaking changes

This section summarizes all deprecated features and planned breaking changes.

- June 1st, 2020: Support for Control Layer protocol version 0 and Message Layer protocol versions 28 and 29 will cease. This affects users with outdated client libraries or self-made integrations dating back more than a year. The deprecated protocol versions were used in JS client libraries 0.x and 1.x, as well as Java client versions 0.x. Users are advised to upgrade to the newest libraries and protocol versions.
- June 1st, 2020: Currently, resources on Streamr (such as streams, canvases, etc.) have three permission levels: read, write, and share. This will change to a more granular scheme that describes the exact actions allowed on a resource by a user. The new permissions are resource-specific, such as stream_publish and stream_subscribe. The upgrade takes place on or around the above date. This may break functionality for a small number of users who are programmatically managing resource permissions via the API. Updated API docs and client libraries will be made available around the time of the change.
- Further away (date TBD): Authenticating with API keys will be deprecated. As part of our progress towards decentralization, we will eventually end support for authenticating based on centralised secrets. Integrations to the API should authenticate with the Ethereum key-based challenge-response protocol instead, which is supported by the JS and Java libraries. At a later date (TBD), support for API keys will be dropped. Instructions for upgrading from API keys to Ethereum keys will be posted well in advance.
- Further away (date TBD): Publishing unsigned data will be deprecated. Unsigned data on the network is not compatible with the goal of decentralization, because untrusted nodes could easily tamper with data that is not signed. As the Streamr Network will be ready to start decentralizing at the next major milestone (Brubeck), support for unsigned data will cease before that point. Users should upgrade old client library versions to newer versions that support data signing, and use Ethereum key-based authentication (see above), which enables data signing by default.

Below is a more detailed breakdown of the month’s developer tasks. If you’re a dev interested in the Streamr stack or have some integration ideas, you can join our community-run dev forum here.

As always, thanks for reading.

Network

- Multi-tracker support is done. Now working on reading the tracker list from a smart contract.
- Moving forward with WebRTC implementation after local environment testing.
- Continuing fixes for Cassandra storage and long resend issues.

Data Unions

- Some Java client issues were affecting Data Union joins, but these should all be fixed now.
- Improved Data Union Server monitoring. The join process is being continuously monitored in production.
- Team started implementing storing state snapshots on IPFS.
- JS client bug fixes to solve problems with joins in the Data Union server.
- Data Union developer docs are being finalised.

Core app (Engine, Editor, Marketplace, Website)

- Implementing UI for managing the buyer whitelist for the Marketplace.
- New Marketplace version has been deployed with Data Union metrics.
- New product views and the Uniswap purchase flow are now live.

Dev Update April 2020 was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.
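Since both of the dev updates above flag the coming deprecation of API-key authentication, here is a minimal sketch of switching to Ethereum key-based authentication with the JavaScript client. The option names reflect the streamr-client configuration documented around this time; treat them as an assumption and check the current client docs before relying on them.

```javascript
const StreamrClient = require("streamr-client");

// Deprecated: authenticating with a centralised API key
// const client = new StreamrClient({ auth: { apiKey: "MY-API-KEY" } });

// Preferred: authenticate with an Ethereum private key. This also enables
// message signing by default, which the unsigned-data deprecation requires.
const client = new StreamrClient({
  auth: { privateKey: process.env.ETHEREUM_PRIVATE_KEY },
});

client
  .publish("my-stream-id", { temperature: 21.3 })
  .then(() => console.log("Published a signed message"))
  .catch(console.error);
```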

Streamr network

20. 05. 19

Our thoughts on the EU Data...

A few weeks ago some team members of the Streamr project attended the MyData Global community meeting, where the recent EU Data Strategy paper was discussed in detail. For those of you not familiar with the organisation, MyData Global is an NGO working on transforming the EU’s GDPR from legal into actionable rights. We recently became official members and signed the MyData declaration, which promotes “moving towards a human-centric vision of personal data.”

Why is the EU Data Strategy important to us?

The Data Union framework we’re developing here at Streamr builds on the premise outlined in the GDPR’s Article 20 on data portability, namely that:

“The data subject shall have the right to have the personal data transmitted directly from one controller to another.”

Data portability grants us the right to take the data we’ve created on one platform with us to another platform of our choosing. However, the law grants platform providers a 30-day period to make data “portable” and furthermore does not give concrete guidelines on the format in which the data is handed over. But what if people want to port, or sell, their data in real time? And yes, they do.

Legal rights need to become actionable rights

This is one of the topics addressed by the new EU Data Strategy. MyData board member Teemu Ropponen argues that we need to:

“Move from formal to actionable rights. The rights of GDPR should be one click rights. I should not go through hurdles to delete or port my data. We need real-time access to our rights.”

Individual users should have the agency to control data about themselves. At the same time, we recognise the immense potential open access to data would bring. Digital businesses require the use of personal data but, beyond that, researchers, startups, SMEs and governments can profit from a more democratised, open access.

MyData Global has a goal to develop a fair, prosperous, human-centric approach to personal data. That means that people get value from their own data and can set the agenda on how their data is used. In order to make this a reality, the ethical use of personal data needs to be promoted as the most attractive option to businesses.

Europe is falling behind in the Data Economy

Viivi Lähteenoja, another MyData Global board member, pointed out during her presentation that Europe realises it’s falling behind when it comes to its share of the data economy. But there is still time to change this. As stated in the recent EU Data Strategy paper:

“The stakes are high, since the EU’s technological future depends on whether it manages to harness its strengths and seize the opportunities offered by the ever-increasing production and use of data. A European way for handling data will ensure that more data becomes available for addressing societal challenges and for use in the economy, while respecting and promoting our European shared values.”

Data is absolutely crucial in solving today’s issues. Just consider the apps that are currently being built to tackle the outbreak of the Covid-19 pandemic. Developing the right tools for our society will become much easier once access to high-quality data sets improves. One important point in this will be the facilitation of data-sharing on a voluntary basis.

The EU wants to tackle this problem head-on by creating a European data space. This is not supposed to be about ‘one platform to rule them all’, but an ecosystem of ecosystems where all data is dealt with in accordance with European laws and values. Its creation is one of the main goals of the European Data Strategy:

“Those tools and means include consent management tools, personal information management apps, including fully decentralized solutions building on blockchain, as well as personal data cooperatives or trusts acting as novel neutral intermediaries in the personal data economy. Currently, such tools are still in their infancy, although they have significant potential and need a supportive environment.”

Personal Data Spaces — The EU’s version of Data Unions

To create better governance and control around personal data, the EU will create so-called Personal Data Spaces. These spaces will serve as neutral data brokers between internet users and platform providers. But, as the strategy paper notes, there is currently a lack of tools for people to exercise their rights and gain value from data in a way that they want.

At Streamr, we have little doubt that our open source Data Unions framework will provide just the tools the EU is searching for and will therefore play a central role in bringing about this vision.

But here’s the catch. Sitting on the frontlines of data portability, we know that in order to make these tools a reality, the law needs to be strengthened. And soon. GDPR Article 20 needs to be amended through the European Data Act, which is to be passed in 2021, so that it allows users one-click rights. As the board members of MyData rightfully note: “Right now data portability is not good enough, what is needed is live portability.”

The EU Data Strategy was published in February and you can take a look here. If you’re interested in watching some of the presentations from the last MyData meeting, take a look here.

Our thoughts on the EU Data Strategy was originally published in Streamr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Streamr network

20. 05. 06

Security verification

This project has undergone security verification.


Information
Platform ERC20
Accepting
Hard cap -
Audit -
Stage -
Location -