Click history
Steve Ellis...you so sneaky
Link 1000 sats eoy
Nice double dubs but pls delete and come in the discord.
Which discord?
Are you referring to the work on the aggregate?
He's still going.
Wow, is he working through the night? What a dedicated motherfucker.
Also, what am I looking at here?
some user earlier said that aggregation was done and would be added
guess he wasnt fuddin
He's setting up / editing all the aggregation work.
Link to thread pls
Oh fuck.. so all that’s really left now that we’re unsure of is the reputation system, right?
Think about it... Steve is in the US, where it's currently 12:30 AM to 3:30 AM. Codeship will include whatever has been in the private repo. I think we find out this week how close we really are to mainnet.
I do believe reputation is in the private repo because the team stated the private repo was not node work. The reputation service will be provided by multiple 3rd-party reputation providers, which falls under non-node work.
I can only get so erect
Node reputation is non nodework?
How?
Well boys, looks like its really happening tonight
It doesn't deal with the architecture of the actual nodes themselves
>The reputation service will be provided by multiple 3rd-party reputation providers, which falls under non-node work.
Basic FUD here: How is this not kicking the can of trusting third parties from "trusting the centralized oracle" to "trusting the centralized reputation provider"?
this
>warosu.org
STEVE ON BIZ CONFIRMED.
They have not gone into too many details but I suspect they will allow quite a few reputation providers to "decentralize" reputation. For all we know, anyone can start a reputation prov. service.
>warosu.org
Then maybe, just maybe he can answer this FUD
Without spilling the beans on their super secret reputation system.
Checkd
The thing is, I have thought on this a lot and I think I know how to solve this. I want to know which approach CL will take, and if it's not truly decentralized I will offer them the solution in return for LINK tokens.
Hail Steve
whatever, you fat fag
Name-calling isn't changing the fact that a centralized reputation system is inherently flawed and would make CL a failed project. No disrespect to Sergey, Steve or Thomas, but a working reputation system is extremely difficult to implement. Maybe that's where Ari Juels comes in.
Smart contracts can be written that keep track of jobs completed, by whom and for which source of data, and they can be contributed to by an arbitrary number of chainlink nodes. Reputation provider = reputation smart contract + consortium of nodes responsible for maintaining the smart contract. Decentralized af m8
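Roughly what that on-chain bookkeeping could look like, sketched in Python instead of an actual contract language; the class, the method names and the success-ratio score are invented for illustration, not anything CL has published:

# Minimal sketch of the "reputation = public job ledger" idea.
# In reality this would live in a smart contract; Python here just shows the bookkeeping.
from collections import defaultdict

class ReputationLedger:
    def __init__(self):
        # node address -> [accepted answers, rejected answers]; append-only and public
        self.records = defaultdict(lambda: [0, 0])

    def record_result(self, node: str, accepted: bool) -> None:
        # Called once per job: did this node's answer survive aggregation?
        self.records[node][0 if accepted else 1] += 1

    def reputation(self, node: str) -> float:
        # One possible score: share of answers that were accepted.
        ok, bad = self.records[node]
        total = ok + bad
        return ok / total if total else 0.0

ledger = ReputationLedger()
ledger.record_result("0xNodeA", True)
ledger.record_result("0xNodeA", False)
print(ledger.reputation("0xNodeA"))  # 0.5

Since every record would be public, anyone, including a competing reputation provider, could recompute the score straight from the ledger.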
but is it feasible though?
>node operators
>reputation validators
>smartcontract auditing
All of this will incorporate fees of some sort
I highly doubt they haven't come up with a viable solution for it.
Yea, the reputation should be simply built into the blockchain. There's no centralization. Just record each contract to the blockchain and the node # which provides the data or something similar. it's all public.
Steve, if you're in this thread, I just want to let you know that while you have been up late at night coding for chainlink, I too have been up into the wee hours of the morning every night right along with you smoking weed, drinking whiskey, reading biz, and talking to cam whores for the last 12 months as I eagerly follow Link's progress on biz. I unfortunately do not possess coding skills. You, Sergey, and the team are my heroes. Don't quit. Never give up. Let's get filthy white man rich together.
I don't think you guys realize how much work will need to go into the aggregation contract.
You need to find a robust way to assess the validity of any type of data returned by X different nodes.
If you're asking for deterministic API info, like the GPS coordinates of a town from a single source, then it's simple.
But what if you need stock price tickers? Those prices can fluctuate by the ms and from source to source, how do you specify a validity range that is acceptable for both the requester and the provider?
What if you're asking for composite data, where one value needs to be averaged across API providers, and one is highly volatile, etc...
When people ask those questions on slack/gitter they're just told that aggregation will magically solve it, but nothing can be said about it atm because the contracts aren't ready.
I honestly don't see it happening except for maybe some super basic requests at first.
>secure oracles by using a smart contract which itself will need a secure oracle.
This is stupid.
Chainlink doesn't have its own blockchain so that's not possible.
Nice attempt at covering your trail Steve, but we've already caught on. Link $1000 EOY.
The token uses ethereum faggot. They can write it to that blockchain. If a node doesn't provide the right data the reputation of that node will get recorded on the blockchain as a failure.
I disagree, this is easily solvable by moving the problem to the person who requests the data. E.g. you name stock price tickers as a problem. But what do you want to know? Simply asking the price isn't correct, you ask for the price at a certain time, which is deterministic and thus doesn't change.
You can give me more "problems" and I will try to explain how easy it is to aggregate them (theoretically, coding adapters for all these usecases is a lot of work though).
Who writes the data to the blockchain though? The entity who determines what reputation goes to the blockchain is centralized. And if you decentralize your reputation, you might have different reputation providers wanting to write different things to the blockchain; are you going to aggregate those as well?
Yeah aggregation is super hard. I think some of it will fall onto the end user side. There will probably be a lot of aggregation functions but I think it makes sense in some cases to allow users to request raw data and aggregate it on the contract side. Although I'm not sure how consensus would work in that case. Not sure how much they'll leave the burden on the user though. It's just like ethereum leaves the job of contract security to the contract writers rather than limiting complexity.
> you ask for the price at a certain time, which is deterministic and thus doesn't change
1. the price will differ depending on the exchange
2. you can't really specify a time, unless you're querying the exchange's price history. if you want real time data, price fluctuates by the microsecond and depending on how much time your node takes to send and receive the request, it may be different
3. even if you're querying the exchange's price history, APIs are buggy and don't always serve the same data (from experience this is especially true of crypto exchanges). So querying at 9:00am for historical price A the day before might give you X, but somebody querying for the same price a few hours later will get Y
> coding adapters for all these usecases is a lot of work though
yes, the data requester would need to code a specific contract (but then the node providers would have to inspect it, which is impractical and dangerous as smart contract auditing is not something anyone can do) or the node operator would need to have an adapter (which means he would need to know how to program one, which is non-trivial. This might work for commonly requested data such as crypto/stock prices, but it's impractical for more specialized cases unless you have maybe like a marketplace for adapters and an incentive to get node operators to create & use the adapters).
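To make the adapter point concrete: in the simplest case an adapter is just a small service that calls the API and hands back a parsed value. A minimal sketch, assuming a made-up exchange endpoint and response shape (this is not CL's actual adapter interface):

# Toy "price adapter": fetch a ticker from some HTTP API and return a clean number.
# The endpoint and JSON layout are placeholders, not any real exchange's API.
import json
from urllib.request import urlopen

def fetch_price(base: str, quote: str) -> float:
    url = f"https://api.example-exchange.com/ticker?pair={base}{quote}"  # placeholder URL
    with urlopen(url, timeout=5) as resp:
        payload = json.load(resp)
    return float(payload["last"])  # assumes the API returns {"last": "6234.15", ...}

def adapter(request: dict) -> dict:
    # Wrap the fetch in a simple request/response envelope a node could consume.
    try:
        price = fetch_price(request["data"]["base"], request["data"]["quote"])
        return {"id": request.get("id"), "result": price, "statusCode": 200}
    except Exception as exc:
        return {"id": request.get("id"), "error": str(exc), "statusCode": 500}

The non-trivial part is exactly what the post says: someone has to write, maintain and audit one of these per data source, and the parsing and error handling are where it gets hairy.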
The contract writes the data to the blockchain. There is no entity deciding the reputation. Either the data was successfully used by the smartcontract or it wasn't. That gets recorded, and from that record you can use a blockchain explorer to read the reputation of the nodes providing the data, which is written to the blockchain when the contract is minted.
I'm not trying to be a dick. It seems pretty clear but you're seeing something else. Am I missing something fellow linkie? Steve, you here? correct me if I'm wrong.
if you leave the burden on the user, there's no way to reward or penalize nodes.
either the user just accepts all the data and uses its own algorithm to sort the good from the bad (then there's no real point in using chainlink)
or the user is able to set its own individualized criteria for what constitutes good data, and that's a clusterfuck because every contract has to be audited by the data providers to ensure they won't get fucked over.
>1. the price will differ depending on the exchange
Hence aggregation, you want the average price on all exchanges. If you want the price on 1 exchange you either use a centralized oracle (direct API from the exchange) or you request the price from that exchange explicitly. For example, there is a difference between asking the price of BTC/USD or the price of BTC/USD on gdax. I think Chainlink's strength lies in the first kind of requests, as the second one is just using a decentralized oracle because you're too lazy to use your own API.
>2. you can't really specify a time, unless you're querying the exchange's price history. if you want real time data, price fluctuates by the microsecond and depending on how much time your node takes to send and receive the request, it may be different
UNIX timestamp. Price fluctuations are once again the reason you aggregate.
>3. even if you're querying the exchange's price history, APIs are buggy and don't always serve the same data (from experience this is especially true of crypto exchanges). So querying at 9:00am for historical price A the day before might give you X, but somebody querying for the same price a few hours later will get Y
This is something you can't solve with oracles, just like you can't solve your internet connection breaking down just before you push "request BTC USD price". It's a problem blockchain technology can't solve.
> maybe like a marketplace for adapters and an incentive to get node operators to create & use the adapters).
Exactly, I think this is what will happen.
Morning Johnny, have you quit your full time job yet?
exactly.. same goes for any smart contracts they write. would be insane to share that code.
the stuff thats public is generic node running infrastructure
>or the user is able to set its own individualized criteria for what constitutes good data, and that's a clusterfuck because every contract has to be audited by the data providers to ensure they won't get fucked over.
I think Chainlink will give the framework but users can tweak some parameters.
But Chainlink is blockchain agnostic. How do you connect all the reputation that's scattered around 100s of different blockchain platforms?
> I'm not trying to be a dick
And I'm not trying to be a brainlet, I just have a lot of questions and doubt about how CL works, because the whitepaper isn't specific enough for me.
I don't get what you mean?
Do you have any idea how blockchain works? Though the exact contents of the smart contract through chainlink may not be public, the fact that there is a smart contract and its status will of course be known.
But the smart contract doesn't know the data is correct? It ASSUMES the data it gets from the oracle is correct and works with that. Any and all smart contract will just use the data it receives, and the judging of which nodes were correct and which were false will be done by the Chainlink network.
It's weird. In the beginning I was fudding Link to accumulate more, which I did ultimately when it was around 13 cents. I amassed 300K Link. I definitely think I have enough to make it. The problem though is that I can't stop fudding my own investment where I am literally all-in with my life savings. I designed some of the most hated and posted memes regarding Chainlink. Again... I am all-in with my life savings and I have no intention whatsoeva to shill this project. Instead I went to insane lengths to meme fud whenever I can. Sometimes I sit a whole day in front of the screen and I FUD FUD FUD FUD. I don't know, something is wrong with me. But since I have invested in Chainlink I feel very different. My behavior makes absolutely no sense... yet I am 100% sure I have to FUD my own investment.
if you were here in October you'd know that edging is a cornerstone of the SmartContract.com company. There is no hype, only energy spent towards working on the product. In fact edging was what Sergey focused on for his Philosophy degree. This explains why partnerships are being kept secret and the suddenness of the inevitable singularity. When the singularity happens, be sure to open the Citizen app if you live in the SF bay area and look for an incident titled "office building flooded with semen" as Sergey et al will no longer be able to contain themselves. Sergey will blow the biggest load though as he's expressed a greater propensity of a hard on for decentralization. In fact in his interviews the first word Sergey says to candidates is "decentralization." No sentences or words around it. He looks intensely at their crotch, and if the candidate doesn't get wet or hard in 30 seconds the candidate is rejected.
With this information the reasoning is clear: a significant partnership has been secured, and the smartcontract team has been vigorously doing laundry or buying new underwear. This isn't sustainable however because the massive volume of pre cum will ruin the dry cleaning machines. It's only a matter of time until the laundromats find out whats going on. Hence it is a race against ejaculation, and a rigorous mental battle to keep their enthusiasm in check.
If he is. Counting on you and have confidence in you. Holding strong.
The LINK token is used for executing the release of data/funds and/or contract execution, so even though a payout might be on, say, IOTA, the actual purchase of the data gets recorded on Ethereum through the chainlink network, so the reputation for whoever provides the data getting paid out in IOTA is still on the Ethereum blockchain, or whatever blockchain the team wants to continue hosting the chainlink network on (Ethereum works best and is the best choice for the foreseeable future, IMO).
They're building in a consensus system to deal with disputes, so your concern actually highlights the benefit of chainlink in that it helps provide only the most correct info to the smart contract to prevent fraud before a contract executes.
> Hence aggregation, you want the average price on all exchanges.
That's not aggregation, that's just averaging the price on all exchanges. How do you determine what's an acceptable average and what is not? Since the value (both real time & historical, albeit less in the second case) is subject to change, you'd have to determine a window of acceptable values. That can be unpredictable and easily turn predatory for the node operators.
> UNIX timestamp. Price fluctuations are once again the reason you aggregate.
UNIX timestamp does not solve anything if you're querying real time data. You may send a request at the UNIX time specified in the contract but if you're in Asia and the endpoint is in Europe, your request will be served later than a node in Germany and the price might differ. As mentioned above, averaging (what you call aggregation) does not solve this issue.
> This is something you can't solve with oracles
If that is so, the use cases for ChainLink are extremely limited
> Exactly, I think this is what will happen.
This is what I hope will happen. But to get to that point, you need to kickstart the whole ecosystem with both your node operators and your data requesters, supposing you have enough to create a real marketplace. To say "it will happen" is wishful thinking, it may happen under the best possible conditions. (I say this as someone whose portfolio is 90% link). And if it does happen, it will be in many years. People keep asking about mainnet, but mainnet doesn't matter as long as you don't have that ecosystem which will take years to develop.
Advice to LINK holders.
Try a pool cleaner vacuum while underwater, especially with a heated pool, it will give you the best orgasm of your entire life. the fans rapidly but gently smack the head of your dick while giving really strong suction. obviously stick your fingers in first to make sure it's safe, not every pool vacuum is the same. I've had blowjobs from 3 different women and 4 different men, I've used vacuums, cock-pumps, fleshlights, vibrators... and NOTHING compares to the pool cleaner. I'm not even fucking kidding right now, if you get the chance, try it. the only thing that is even remotely close to how good that pool vacuum felt was straight up vaginal sex with this fat chick who had a really warm snatch, it was like sticking my dick into a wet loaf of banana bread straight out of the oven, and yes this fucking pool cleaner vacuum was better than that. I don't own a pool or else I'd be doing it every day. unfortunately the owner of the pool caught me doing it so I'm not allowed to be within 1000 feet of his house anymore but it was so fucking worth it, I'm telling you that fucking pool vacuum is like heaven. honestly the only reason I even want to make it with LINK is so I can afford my own house with a heated pool and of course a pool vacuum. I can't wait to buy a dozen different brands and styles of pool cleaners and fuck them all. I live for that day to come.
Advice over.
What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my philosophy class, and I’ve been involved in numerous secret blockchain projects, and I have over 300 confirmed smart contract transactions cleared. I am trained in Javascript and I’m the top speaker in the entire cryptosphere. You are nothing to me but just another pajeet. I will wipe your wallet out with the precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of hackers around the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your portfolio. You’re fucking finished, kid. I can be anywhere, anytime, and I can destroy your networth in over seven hundred ways, and that’s just with my bare hands. Not only am I extensively trained in hacking, but I have access to the entire arsenal of the Enterprise Ethereum Alliance and I will use it to its full extent to wipe your miserable wallet off the face of the blockchain, you little shit. If only you could have known what unholy retribution your little “clever” comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You’re fucking broke, kiddo.
>That's not aggregation, that's just averaging the price on all exchanges. How do you determine what's an acceptable average and what is not? Since the value (both real time & historical, albeit less in the second case) is subject to change, you'd have to determine a window of acceptable values. That can be unpredictable and easily turn predatory for the node operators.
How is aggregating numerical values not some kind of averaging? You can give a weighted average, with weight given by reputation. There are tons of techniques like this that are used all over the world when dealing with scientific measurements.
>UNIX timestamp does not solve anything if you're querying real time data. You may send a request at the UNIX time specified in the contract but if you're in Asia and the endpoint is in Europe, your request will be served later than a node in Germany and the price might differ. As mentioned above, averaging (what you call aggregation) does not solve this issue.
I'm not seeing the problem here. If I call for the price at 1529916539 (right now) what does it matter where I ask it from? It's the same time everywhere. Sure, in Asia or Germany they might receive my request a fraction of a second later, but they will still see the same price at that exact moment. Price fluctuating by the microsecond can also be specified: if you want it to the microsecond, you send a request for the price at a certain microsecond, 1529916539.135 for example.
>If that is so, the use cases for ChainLink are extremely limited
ChainLink won't solve hardware breaking down or websites having errors, but the decentralization will solve it partly because you don't depend on 1 source anymore.
>And if it does happen, it will be in many years
Agreed. Chainlink usage also requires sharding to be implemented on Ethereum imo.
>Agreed. Chainlink usage also requires sharding to be implemented on Ethereum imo.
If we can execute future smartcontracts, how many do you think are getting executed daily? I don't disagree for larger institutions down the road, but initial implementation will be easily handled by Ethereum.
> How is aggregation numerical values not some kind of averaging? You can give a weighted average, with weight given by reputation. There are tons of techniques like this that are used all over the world when dealing with scientific measurements
You're describing aggregating data that's already received and supposing that all the data is valid. What if the node with the highest reputation gives you bad data? Your whole average is skewed and you have no way of knowing that data was bad. You can't penalize or reward users based on the type of aggregation you are describing. It works for scientific measurements because the actors act in good faith. It doesn't work for crypto where people will try to exploit the system for personal gains.
The only way I see around it is specifying that the data has to be within a certain range of the mean of all the data returned by all the nodes (you can weight by reputation there if you'd like). But that means that certain data providers will necessarily be penalized (since the threshold is relative to the mean) even if the difference is minute and they are acting in good faith. This also poses the question of how to handle non-numeric values (strings, dates...)
> I'm not seeing the problem here. If I call for the price at 1529916539 (right now) what does it matter where I ask it from?
Yes, because not all exchanges have an API that lets you query for historical data. Very often you can only request real-time data. That means that the only thing you can do is specify in your Chainlink job at what time the request should be sent, but that still leaves a lot of leeway for fluctuations based on lag.
If the exchange supports historical price query, great. But as mentioned, even historical price data sometimes fluctuate depending on when you send your request or if the API is buggy.
So the only way to solve the problem is to solve the problem above, which I don't see a practical, scalable solution for (but I'll be happy to be proven wrong).
>Initial implementation will be easily handled by Ethereum.
Probably. All the more so because CL will work with any and all smart contract "platforms", and only the Link token transactions are tied to ETH. And things like node payments can be done in batches for longer-term contracts like website uptime surveillance n shiiieeeet.
>Very often you can only request real time data
What?
and buy weycoin
kek
$0 or $1000 EOY
If the node with the highest reputation gives bad data, the concept of what is right and wrong should be put in question. The reputation system should work in a way that high reputation gives validity. A high reputation node can indeed try to exploit the system, but risks losing its reputation along the way.
>The only way I see around it is specifying that the data has to be within a certain range of the mean of all the data returned by all the nodes (you can weight by reputation there if you'd like).
How is this different from what I propose? You aggregate data and use an algorithm to determine the consensus. Then you penalize everyone who was wrong and pay out the ones who were right. The parameters of which answer was right enough and which was too far off can all be specified by the smart contract, e.g. two standard deviations off is too far (rough sketch at the end of this post).
Non numeric values can all be translated to numeric values. Or you work with the mode of the data, like for booleans.
Then third parties will pop-up that track those sites to keep track of the historical data.
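A minimal sketch of that aggregate-then-settle flow on numeric answers, assuming reputation-weighted averaging and the two-standard-deviation cutoff mentioned above; all names and parameters are illustrative, nothing from the CL repo:

# Sketch of "aggregate, then pay/penalize" for numeric answers.
# Reputation weights and the 2-sigma cutoff are illustrative, not CL's spec.
from math import sqrt

def settle(answers: dict[str, float], weights: dict[str, float], k: float = 2.0):
    nodes = list(answers)
    w = [weights.get(n, 1.0) for n in nodes]
    x = [answers[n] for n in nodes]
    mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)            # reputation-weighted mean
    var = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x)) / sum(w)
    sigma = sqrt(var)
    accepted = {n for n in nodes if sigma == 0 or abs(answers[n] - mean) <= k * sigma}
    return mean, accepted, set(nodes) - accepted                    # value, paid, penalized

value, paid, slashed = settle(
    {"nodeA": 6234.1, "nodeB": 6234.6, "nodeC": 9000.0},
    {"nodeA": 0.9, "nodeB": 0.8, "nodeC": 0.4},
)
print(round(value, 2), paid, slashed)

In this toy run nodeC's 9000.0 lands outside the 2-sigma band around the weighted mean and gets flagged, while nodeA and nodeB are paid.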
If I want to know the BTC/USD price right now via Chainlink, I don't want to wait for 10 minutes or however long it takes for the link transaction to take place to power the smart contract.
>If I want to know the BTC/USD price right now via Chainlink, I don't want to wait for 10 minutes or however long it takes for the link transaction to take place to power the smart contract.
Smart contracts are automatic.
Don't they need to wait for the oracles to put in LINK as collateral ?
That process will also largely be automated.
You as the client indicate an assignment and a certain collateral, and nodes that have their parameters set to that collateral and that type of request (maybe even specific API) will automatically engage.
Also, you don't use oracles just to "know" the BTC/USD price. You can just search that on the internet.
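A toy version of that automatic engagement, assuming each node keeps a standing filter over incoming assignments; every field name here is invented for illustration:

# Toy matching rule a node operator might run against incoming assignments.
NODE_CONFIG = {
    "task_types": {"httpget", "jsonparse"},   # adapters this node supports
    "min_payment": 2.0,                       # minimum payment in LINK
    "max_collateral": 50.0,                   # most LINK the node will stake per job
}

def should_bid(assignment: dict) -> bool:
    return (
        assignment["task_type"] in NODE_CONFIG["task_types"]
        and assignment["payment"] >= NODE_CONFIG["min_payment"]
        and assignment["collateral"] <= NODE_CONFIG["max_collateral"]
    )

print(should_bid({"task_type": "httpget", "payment": 3.5, "collateral": 20.0}))  # True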
>Chainlink usage also requires sharding to be implemented on Ethereum imo.
yep, this is why mainnet launch will not bring in the singularity. mainnet will let customers test it and build some trust in the system, then when eth scales and the network starts to get useful we will see adoption and singularity.
maybe eth scales before mainnet!
Yeah nah
Thank you for making this pasta, that was a good thread
>$1000 EOY
Last three digits of my post are the link price eoy
Nice
Nice fud.
which could actually be good.
fees increases costs = higher price
noice
Great thread and good debates about the merits of Link. Looks like the project continues to make excellent progress. Will continue to accumulate and Hodl. I am comfy as can be because clearly ETH needs smart contracts and Link to maximize its value. Don't be fooled, Vitalik fully supports and is invested in Link. Just my opinion of course.
WOW stop with that meme magic user.
> How is this different from what I propose? You aggregate data and use an algorithm to determine the consensus. Then you penalize everyone who was wrong and pay out the ones who were right. The parameters of which answer was right enough and which was too far off can all be specified by the smart contract, e.g. two standard deviations off is too far
Yeah, but that's a problem for several reasons:
- The algorithm to aggregate data has to be created specifically for the task; it's not something the LINK devs can put out a generic algorithm for. This means the node operator would have to review the code first to understand the terms of the contract and to make sure they're not delivering data to a requester that will only reward 1 guy and penalize everybody else. This supposes that the data requester has the time/money/will to create a specific smart contract and that the data providers have the time/skills/will to audit that contract first. Very unrealistic.
- Two standard deviations too far, or any "standard deviation" measure may sound right, but in practice this is dependent upon the average. What if everybody returns good data? Then you still have to penalize the nodes that are the furthest away from the mean, even if it's a very minor difference. What if several nodes return bad data and the mean gets skewed? Then several nodes that returned good data will get penalized.
>Non numeric values can all be translated to numeric values. Or you work with the mode of the data, like for booleans.
Yes, but how do you create a consensus algorithm based on a string converted to a numerical value ?
Say a weather API returns a string with the weather, followed by a timestamp. (e.g. "Tomorrow is Hot. Requested at 2018-08-08:08:08:08")
All nodes will have a slightly different string, and you can't reliably use arithmetic to determine who's right. At best you'd have to create something using Levenshtein distance or more advanced NLP.
Of course you can have an adapter with a parser for the string, but 1. you have to ask node operators to use your adapter or create their own (and therefore create an incentive for it) 2. how does that work, say, if you're transferring base-64 encoded binary data or a more complex string?
It's nice to have this discussion user.
There can not be a single aggregation algorithm, chainlink is way too multifunctional for this. I expect they make several possible aggregation methods available that the requesting user can ask to be used.
If multiple data providers give the "wrong" data and this skews the mean, then this wrong data is actually the correct data. If Germany wins this world cup but everybody on the internet and in the paper prints that Brazil wins the world cup, then Brazil has won the world cup. The data may be wrong, but if it's accepted as reality that's the way it is. I guess I'm saying the person or the company that manages to control all of LINK and its reputation, controls reality for smart contracts. We just have to trust game theory that nodes will give trustworthy data.
Now the case when everybody returns good data, that's a good one. I guess the requesting user could specify how exact the result needs to be. Like for example if you ask for BTC/USD you need to get within 5 cents. Then if you are less than 5 cents from the aggregated mean OR less than two standard deviations away, you don't get a penalty.
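That "within 5 cents OR within two standard deviations" rule is just a predicate the requester could parameterize. A small sketch, with the 5-cent window and k=2 taken from the example above, not from any CL spec:

# Acceptance rule: pass if within an absolute tolerance OR within k standard deviations.
def answer_accepted(x: float, mean: float, sigma: float,
                    abs_tol: float = 0.05, k: float = 2.0) -> bool:
    return abs(x - mean) <= abs_tol or abs(x - mean) <= k * sigma

# When everyone clusters tightly, sigma is tiny but nobody gets punished:
print(answer_accepted(6234.13, 6234.11, 0.004))  # True, saved by the 5-cent window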
Last I checked, Binance only allowed you to query something like "ticker/price/SYMBOL", which returns the price of crypto X at the time you make the request. You can't request the price for crypto X at a specified date and time in the past. Maybe this has changed, but I believe this is still the same for most other exchanges.
>Then third parties will pop-up that track those sites to keep track of the historical data
Maybe such sites will pop up for data that is highly in demand, but they won't for a lot of other use cases that are more specific, which therefore limits the adoption of the network. And if you need centralized sites to aggregate data for you, it kinda defeats the purpose of having a decentralized network to query the data. Now the site doing the aggregation can easily manipulate the data. You wouldn't just need one site to offer the historical data, you'd need several for the operation to make sense and have some security. It sounds very redundant and inefficient, not to mention it's doubtful that there will be a high enough incentive to have several of those sites on the market in the first place.
>Yes, but how do you create a consensus algorithm based on a string converted to a numerical value ?
>Say a weather API returns a string with the weather, followed by a timestamp. (e.g. "Tomorrow is Hot. Requested at 2018-08-08:08:08:08")
>All nodes will have a slightly different string, and you can't reliably use arithmetic to determine who's right. At best you'd have to create something using Levenshtein distance or more advanced NLP.
>Of course you can have an adapter with a parser for the string, but 1. you have to ask node operators to use your adapter or create their own (and therefore create an incentive for it) 2. how does that work, say, if you're transferring base-64 encoded binary data or a more complex string?
I don't think Chainlink can automate a description of the weather. Aggregating "the weather is hot" or "it's fine" or "it's too hot" is impossible unless you use AI. You could aggregate more concrete strings, such as names. Who won the world cup 2014? You can check for Germany or germany, and use the mode to aggregate. Everything else is wrong, even if it's just misspelled. Or you could provide a list of possible answers (all countries in the world) and give numerical values to them, again using the mode to determine the consensus. This is possible for every question with a discrete answer pool (rough sketch at the end of this post).
If you ask chainlink for "what's the best poem of Oscar Wilde" you will need an insane aggregation system. I don't think that's feasible.
But isn't 99% of all functionality numerical values or booleans?
>Maybe such sites will pop up for data that is highly in demand, but they won't for a lot of other use cases that are more specific, which therefore limits the adoption of the network. And if you need centralized sites to aggregate data for you, it kinda defeats the purpose of having a decentralized network to query the data. Now the site doing the aggregation can easily manipulate the data. You wouldn't just need one site to offer the historical data, you'd need several for the operation to make sense and have some security. It sounds very redundant and inefficient, not to mention it's doubtful that there will be a high enough incentive to have several of those sites on the market in the first place.
If the network takes off, this would be an easy way to offer data to chainlink nodes. I think there would be hundreds of sites tracking binance prices. It's redundant and inefficient, but just because binance doesn't offer historical requests. That's their problem, not chainlink's.
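And the mode-based consensus for discrete answers sketched above is about this much code, assuming a trivial normalization (strip + casefold); what counts as normalization would be up to whoever writes the adapter:

# Mode-based consensus for discrete string answers ("who won the 2014 world cup?").
from collections import Counter

def string_consensus(answers: dict[str, str]):
    normalized = {node: ans.strip().casefold() for node, ans in answers.items()}
    winner, _count = Counter(normalized.values()).most_common(1)[0]
    agreed = {node for node, ans in normalized.items() if ans == winner}
    return winner, agreed, set(answers) - agreed   # consensus value, paid, penalized

value, paid, slashed = string_consensus(
    {"nodeA": "Germany", "nodeB": "germany ", "nodeC": "Brazil"}
)
print(value, slashed)  # germany {'nodeC'}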
I am enjoying the discussion too and wish there were more people debating in LINK threads instead of just the usual $1000 EOY
> If multiple data providers give the "wrong" data and this skews the mean, than this wrong data is actually the correct data.
That is true, I just wonder if this may not put off a lot of people interested in requesting data through LINK because of the security risk.
> Now the case when everybody returns good data, that's a good one. I guess the requesting user could specify how exact the result needs to be. Like for example if you ask for BTC/USD you need to get within 5 cents. Then if you are less than 5 cents from the aggregated mean OR less than two standard deviations away, you don't get a penalty.
Yes, you can come up with accommodating solutions for all parties involved, but then you have to create new contracts and the node operators will have to audit them to make sure they're not getting ripped off, and you can't expect all node operators to have solidity skills and smart contract auditing skills...
>That is true, I just wonder if this may not put off a lot of people interested in requesting data through LINK because of the security risk.
This is the basic idea of the FUD that was copy-pasted a while back about someone explaining chainlink to his boss, who didn't understand it and just decided to use oraclize instead. I think we shouldn't underestimate the power of reputation and decentralization. It might be extremely costly to risk your reputation just to give bad data to a smart contract you don't even know. But it's also the reason why I really want main net to come quickly, I have so many questions about the project and there is just not enough information as of now.
>Yes, you can come up with accommodating solutions for all parties involved, but then you have to create new contracts and the node operators will have to audit them to make sure they're not getting ripped off, and you can't expect all node operators to have solidity skills and smart contract auditing skills...
That's also a valid point. I don't have an answer then.
Kek'd
>Yes, but how do you create a consensus algorithm based on a string converted to a numerical value ?
Quite easily actually, but your example is invalid; any contract would be based on a temperature or, for example, a rainfall level with specified units (metric or imperial). That being said, there is and has been for a decade conversion casting, which allows a string like "17.02" to be cast as a numerical value. It sounds to me like you are trying to talk like a dev without ever having been one.
Does this mean mainnet is close and we will finally see if the Chainlink network is adopted, if marketing starts, and if all those imaginary dots weren't so imaginary after all
>Does this mean mainnet is close
It's one day closer than it was yesterday.
What's your take on high reputation nodes giving wrong data?
>Who won the world cup 2014? You can check for Germany or germany, and use the mode to aggregate.
Again, since the data source is ultimately FIFA and no one else, the people running the World Cup would have to state the wrong winner. Of course you could make your contract use a source other than FIFA, but that would just be foolish unless you were trying to make some sort of bet on the media reporting it incorrectly. To give yourself assurance you could have the result checked twice from FIFA at a 48-hour interval.
>What's your take on high reputation nodes giving wrong data?
Why would they do that? If consensus was against them it would diminish their reputation and cause them a penalty. Remember, they don't know what data they are processing.
I think it will take several years for widespread adoption but it will happen, probably with weaker competition in place, but once adoption starts it will be unstoppable. I work in logistics and this is literally going to change how your orange juice gets to you, enhance efficiency, automate invoice discounting and proof of delivery, and affect every portion of the supply chain.
I'm not sure which part you're arguing here. We're assuming someone uses ChainLink and not the FIFA site. Most nodes will use the FIFA site, but spelling mistakes are possible, or some nodes could use other sources, or some nodes could provide false data to exploit the system somehow.
Agreed.
Bad contract, bad result; good contract, good result. Contract coders are going to be high earners
>or some nodes could provide false data to exploit the system somehow.
What you are not understanding is that the node does not know what it is validating, so it can't do that unless it gives garbage for everything it is validating