tag:beta.briefideas.org,2005:/all?page=25
Journal of Brief Ideas: Ideas from the last week (2017-01-23T06:22:05Z)
tag:beta.briefideas.org,2005:Idea/359 (2017-01-23T06:22:05Z / 2017-01-25T06:00:50Z)
Exchange of value between USB devices, bitcoin
https://doi.org/10.5281/zenodo.259516
BOUNDER ##design 0.1##
One way to secure UTXO exchange between USB devices (hardware).
Introduction.
While bitcoin and other cryptocurrencies are fairly successful, they suffer from the weakness that they need an internet connection to perform any transaction. The algorithm design I propose is an off-chain value exchange between peers in a chain-like manner. This design would include one Atmel crypto chip and rewriting two USB drivers, 'sector 1' and 'sector 2'.
Configuration of the USB drive.
The problem with exchanging value stored offline is that, de facto, anyone can create a set of UTXOs and claim that they are unspent. One way to solve this is to have an exchange validate your UTXO set. The UTXO set would be denominated per satoshi, and a secret PIN is given to the owner of the USB device. USB driver sector 1 asks for validation of every satoshi spent ('PIN A'). Driver sector 2 could ask for a key unique to this USB drive, validating that the exchange is actually yours. The exchange validates the UTXO set with the embedded unique key of the USB device in a signature added to the UTXO set, and writes that value to the USB device. Adding 'PIN A' to the UTXO set confirms it is yours to spend. So imagine an exchange of value, signed by a set of signatures, happening offline and in a secure way.
Algorithm design: device X <=> device Y
sector 1, USB driver 1
boot USB 1 and generate key X1
boot USB 2 and generate key Y1
'PIN A' creates multi-signature XY1
.lib1
the multi-signature is divided and control bits are added
('loop back sector 1'; add the sequence signature to sector 2; the first checkpoint bit is timed)
https://atmelcorporation.wordpress.com/2016/02/22/atmel-launches-the-industrys-first-hardware-interface-library-for-tls-stacks-used-in-iot-edge-node-apps/
sector 2, USB driver 2
USB 1 generates key X2
USB 2 generates key Y2
check the 'unique key' of each USB device, validating the UTXO set
use multi-signature 2
checkpoint value Z and the time bit ('Atmel time checkpoint'), and all is validated.
If sequence Z has a wrong bit, key generation is reinitialized and the peers add value to the exchange.
At this point one could loop the entire process and validate the information / value exchanged.
The value of this algorithm lies in the loop sequences of the circuit, where time ('time to hop') is the MFA and the peers are committed to the exchange of data.
If all checks out, the data is exchanged by driver sector 1 and driver sector 2.
Hardware.
I imagine this hardware device as solar powered, with a battery able to supply 900 mA at 5 V; an LED screen and 3 push buttons would suffice for entering any value.
### example: USB bounder in pseudo-code
1#sector1 USB driver1 {
start usb1;
check validating 'Pin A' with UTXO set;
if validating entire set = true
sign multi signatureAX1;
}
2#sector1 USB driver2 {
start usb2;
check validating 'Pin A' (or the PIN of usb2) with the UTXO set;
if validating entire set = true
sign multi signatureAY1;
}
3#.lib1 {
generate AXAY1;
confirm loop time of driver 1 and driver 2 with added AX1 and AY1 (count bits = time);
add crypto with the Atmel chip ''blinded (unlock key = counted bit string/amount and sequence of bits)'';
add lib1 + blinded key to the circuit;
blind unique signing key Z;
}
Steps 1/2/3 need one active loop with a time/bit check against the Atmel chip crypto (otherwise one could just replace the USB device).
4#sector2 usb driver2 {
confirm Unique key from AX1 with .lib1 clone;
confirm Unique key from AY1 with .lib1 clone;
input value with push button;
signing AXAY1 and remove UTXO from .lib1 sequence;
sign new loop with atmel chip blinded crypto key;
}
5#validating string {
.lib1 is valid;
4#sector2 usb driver2 is valid;
extracted value from clone = sector1 USB driver1 + sector1 USB driver2;
the bit count changed by the expected value;
the Atmel blinded key came from the onboard chip;
release UTXO USB1 to USB2
}
6#MFA {
all was performed within the expected bit count;
if .lib1 + 4 + 5 = true;
write validating 'Pin A' receiver on exchanged UTXO set;
}
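A minimal, purely illustrative Python sketch of the loop above. All helper names (`gen_key`, `make_multisig`, `timed_loop_check`) are hypothetical, and HMAC-SHA256 stands in for both the Atmel chip's crypto and real Bitcoin multi-signatures; real hardware would sign on-chip.

```python
# Sketch of the timed two-device exchange loop: both devices generate keys,
# combine them under 'PIN A' into one tag, and the UTXO set is released only
# if the recomputed tag matches and the loop finished within the time budget.
import hashlib
import hmac
import os
import time

def gen_key() -> bytes:
    """Stand-in for per-boot key generation on a USB device."""
    return os.urandom(32)

def make_multisig(key_x: bytes, key_y: bytes, pin_a: bytes) -> bytes:
    """Combine both device keys under 'PIN A' into one tag (multi-signature XY1)."""
    return hmac.new(pin_a, key_x + key_y, hashlib.sha256).digest()

def timed_loop_check(max_seconds: float, work) -> bool:
    """The 'time to hop' MFA: the loop must finish within the expected time,
    otherwise we assume a device was swapped out mid-exchange."""
    start = time.monotonic()
    work()
    return (time.monotonic() - start) <= max_seconds

# sector 1: both devices sign the UTXO set under PIN A
pin_a = b"1234"
utxo_set = {"txid00:0": 5000}            # satoshi-denominated, made up
key_x1, key_y1 = gen_key(), gen_key()
sig_xy1 = make_multisig(key_x1, key_y1, pin_a)

# sector 2: re-derive the tag, compare in constant time, then release the UTXOs
ok = timed_loop_check(1.0, lambda: None) and \
     hmac.compare_digest(sig_xy1, make_multisig(key_x1, key_y1, pin_a))
if ok:
    received = dict(utxo_set)            # UTXO set released from USB 1 to USB 2
```

The constant-time `hmac.compare_digest` matters here: a naive `==` on signature bytes would leak timing information to a hostile device on the bus.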
JohnnSebastian@protonmail.com
bitcoin wallet : 1261fKX1aEcEF7dgKyvkCgPwvcHNASwmnB
Sebastian, John
tag:beta.briefideas.org,2005:Idea/357 (2016-12-14T20:05:29Z / 2016-12-15T06:00:59Z)
Reproducible Research with End-to-end Machine Inference Using Deep Learning and Bayesian Statistics
https://doi.org/10.5281/zenodo.203086
The conventional statistical inference based on hypothesis testing and *p*-values is fundamentally flawed. The general practice of data analysis involves too much post hoc decision making based on *p*-values, which unavoidably violates the assumptions of frequentist statistics and, worse, leads to [*p*-hacking](http://pss.sagepub.com/lookup/doi/10.1177/0956797611417632) and the ["garden of forking paths"](http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf). This is especially true for data that requires multiple preprocessing steps. For example, it has been observed that [virtually no two fMRI papers contain an identical analysis pipeline](http://www.sciencedirect.com/science/article/pii/S1053811912007057). Indeed, researchers often face many arbitrary choices, such as what kind of smoothing should be used, or which algorithm and parameter set should be applied for motion correction, etc.
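As a toy illustration of those arbitrary choices, the sketch below (synthetic data; `smooth` and `cv_error` are hypothetical stand-ins for a real preprocessing pipeline) treats one such choice, a smoothing width, as a hyperparameter selected by cross-validation rather than by post hoc judgement:

```python
# Treat an arbitrary preprocessing choice (moving-average smoothing width) as
# a hyperparameter and pick it by k-fold cross-validation on held-out points.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)   # noisy synthetic signal

def smooth(y_in, width):
    """The preprocessing step; `width` is the 'arbitrary' analyst choice."""
    kernel = np.ones(width) / width
    return np.convolve(y_in, kernel, mode="same")

def cv_error(width, k=5):
    """k-fold CV error: hide every k-th point, interpolate it from the rest,
    smooth, and score the prediction against the hidden observations."""
    errs = []
    for i in range(k):
        held_out = np.arange(i, y.size, k)                 # interleaved fold
        keep = np.setdiff1d(np.arange(y.size), held_out)
        y_train = y.copy()
        y_train[held_out] = np.interp(x[held_out], x[keep], y[keep])
        pred = smooth(y_train, width)
        errs.append(np.mean((pred[held_out] - y[held_out]) ** 2))
    return float(np.mean(errs))

widths = [1, 5, 15, 51]
best = min(widths, key=cv_error)   # the choice is now data-driven, not post hoc
```

The same pattern scales to several choices at once (grid over smoothing, motion-correction parameters, etc.), which is exactly the pipeline comparison the entry describes.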
With the advances in deep learning and computational Bayesian approaches, it is easier than ever to fit (very) large models and apply [Box's loop](http://www.annualreviews.org/eprint/7xbyci3nwAg5kEttvvjk/full/10.1146/annurev-statistics-022513-115657) to perform model criticism and inference. It is thus possible to apply an end-to-end inference framework, packaging the whole data analysis and statistical inference workflow into a [pipeline](http://scikit-learn.org/stable/modules/cross_validation.html). This allows researchers to quantitatively evaluate the human part of data analysis (e.g., different parameter settings) by directly modelling these arbitrary choices as hyperparameters in the model using Bayesian inference. It is also possible to develop many pipelines with different models and parameters, then evaluate these pipelines using [cross-validation and prediction](http://scikit-learn.org/stable/tutorial/statistical_inference/model_selection.html), etc. Combined with an open data policy, this framework can greatly improve research reproducibility.
Lao, Junpeng
tag:beta.briefideas.org,2005:Idea/354 (2016-12-08T11:48:03Z / 2016-12-10T02:41:10Z)
Unifying generative models and exact likelihood-free inference with conditional bijections
https://doi.org/10.5281/zenodo.198541
Recent work in density estimation uses a bijection $f : X \to Z$ (e.g. an invertible flow or autoregressive model) and a tractable density $p(z)$ (e.g. [[1]](https://arxiv.org/abs/1410.8516) [[2]](http://www.dmi.usherb.ca/~larocheh/projects_nade.html) [[3]](https://arxiv.org/abs/1410.6460) [[4]](https://arxiv.org/abs/1505.05770)):
\begin{equation}
p(x) = p(f_\phi(x)) \left| \det\left( \frac{\partial f_\phi(x)}{\partial x} \right) \right| \;,
\end{equation}
where $\phi$ are the internal network parameters of the bijection $f_\phi$. Learning proceeds via gradient ascent using $\nabla_\phi \sum_i \log p(x_i)$ with data $x_i$ (i.e. maximum likelihood with respect to the internal parameters $\phi$). Since $f$ is invertible, this model can also be used as a generative model for $X$.
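As a minimal numeric sketch of the formula above, take a hypothetical elementwise affine bijection $f_\phi(x) = (x - \mu)/\sigma$ with a standard-normal base density $p(z)$; the change-of-variables result then coincides with the closed-form Gaussian log-density, which gives a built-in sanity check:

```python
# Change of variables with an elementwise affine bijection f(x) = (x - mu)/sigma:
# log p(x) = log p(f(x)) + log|det df/dx|, and df/dx = diag(1/sigma).
import numpy as np

def std_normal_logpdf(z):
    """Elementwise log-density of N(0, 1)."""
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def flow_logpdf(x, mu, sigma):
    """log p(x) via the change-of-variables formula."""
    z = (x - mu) / sigma                    # the bijection f_phi(x)
    log_det = -np.sum(np.log(sigma))        # log|det df/dx| = -sum log sigma
    return float(np.sum(std_normal_logpdf(z)) + log_det)

x = np.array([0.3, -1.2])
mu = np.array([0.1, 0.5])
sigma = np.array([2.0, 0.7])

# Independent check: closed-form log-density of N(mu, diag(sigma^2)).
direct = float(-0.5 * np.sum(((x - mu) / sigma) ** 2)
               - np.sum(np.log(sigma)) - x.size / 2 * np.log(2 * np.pi))
```

In a real flow, `flow_logpdf` would chain several learned bijections, accumulating one `log_det` term per layer; the affine case is just the one-layer instance.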
This can be generalized to the conditional density $p(x|\theta)$ by utilizing a family of bijections $f_{\theta} : X \to Z$ parametrized by $\theta$ (e.g. [[5]](https://arxiv.org/pdf/1609.03499.pdf) [[6]](https://arxiv.org/abs/1611.05209)):
\begin{equation}
p(x|\theta) = p(f_{\phi; \theta}(x)) \left| \det\left( \frac{\partial f_{\phi; \theta}(x)}{\partial x} \right) \right|
\end{equation}
Here $\theta$ and $x$ are inputs to the network (and its inverse), and $\phi$ are internal network parameters. Again, learning proceeds via gradient ascent using $\nabla_\phi \sum_i \log p(x_i|\theta_i)$ with data $x_i,\theta_i$.
We observe that not only can this model be used as a conditional generative model $p(x|\theta)$, but it can also be used to perform asymptotically exact, amortized likelihood-free inference on $\theta$.
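A toy illustration of this amortized inference, using a hypothetical conditional bijection $f_\theta(x) = (x - \theta)/\sigma$ with a standard-normal base density (in practice $f$ would be a trained network): once $p(x|\theta)$ is tractable, inferring $\theta$ is a direct likelihood evaluation, with no simulator runs.

```python
# Amortized likelihood-free inference: evaluate the tractable conditional
# density p(x|theta) on a grid of theta and take the maximizer.
import numpy as np

SIGMA = 1.5  # assumed known scale of the toy conditional bijection

def cond_logpdf(x, theta):
    """log p(x|theta) via change of variables for f_theta(x) = (x - theta)/SIGMA."""
    z = (x - theta) / SIGMA
    return -0.5 * (z ** 2 + np.log(2 * np.pi)) - np.log(SIGMA)

x_obs = 2.0
theta_grid = np.linspace(-5.0, 5.0, 1001)
log_like = cond_logpdf(x_obs, theta_grid)            # one cheap vectorized pass
theta_hat = float(theta_grid[np.argmax(log_like)])   # MLE; peaks at theta = x_obs
```

The "amortized" part is that the (trained) network is reused for every new observation `x_obs`: inference cost is a forward pass per candidate $\theta$, regardless of how expensive the original simulator was.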
This is particularly interesting when $\theta$ is identified with the parameters of an intractable, non-differentiable computer simulation or the conditions of some real-world data collection process.
Cranmer, Kyle; Louppe, Gilles
tag:beta.briefideas.org,2005:Idea/353 (2016-11-30T10:00:23Z / 2019-04-29T09:57:33Z)
A sustainable URL shortener for science
https://doi.org/10.5281/zenodo.199064
Emerging infrastructures for scientific research such as [Zenodo](https://zenodo.org/) and [ORCiD](http://orcid.org/) offer persistent identification of data, software, publications, people, and institutions. Publications, however, often contain links to URLs on the internet, e.g., to project or software homepages, or specific query results on search or visualisation platforms. For example, the [ANNIS](http://corpus-tools.org/annis/) search infrastructure for linguistic multi-layer corpora [lets the user create persistent links](http://corpus-tools.org/annis/documentation.html) to actual search results. However, as instances of platforms or whole services die or move, and homepages change, such links are prone to break. This makes it harder to verify the results presented in scientific publications. A possible solution would be to use a facade for links, e.g., a [URL shortener](https://en.wikipedia.org/wiki/URL_shortening), which allows the user to change the target of the shortened URL. Some [research institutions already provide URL shorteners](https://hu.berlin/), but usually only for links within their domain, e.g., `hu-berlin.de` for the [Humboldt-Universität zu Berlin, Germany](https://www.hu-berlin.de). If resources are moved to another institution, the shortened URL will break and is thus not suitable for publication. A sustainable, institution-agnostic URL shortener for the scientific domain would allow users - e.g., via [Shibboleth](https://en.wikipedia.org/wiki/Shibboleth_(Internet2)) - to create short URLs for publication and change their target links after creation.
Targets could possibly be restricted to the domains of institutions within the Shibboleth space, as well as manually registered institutions.
Druskat, Stephan
tag:beta.briefideas.org,2005:Idea/350 (2016-11-25T13:22:30Z / 2021-11-15T20:01:02Z)
Make computer-aided research more verifiable
https://doi.org/10.5281/zenodo.171845
A central problem with modern computer-aided research is that research based on long computations becomes difficult to verify. It is thus unclear how and why such work should be trusted. The most widely discussed aspect of this problem is the (non-)reproducibility of computational work, but the issue goes deeper: even 100% reproducible work can be wrong because of a software bug, an implicit assumption in the code that is not documented, etc.
One measure to improve the situation is to require authors who use computers in their research to state in a short section of each paper (1) what they have done to validate their methods and their software and (2) how readers can attempt an independent verification of the work. Reviewers would then check if those statements match or exceed the current state of the art. The goal is to bring the discussion about the problem out in the open, in a way that nobody can simply ignore it. The measure acts as both carrot and stick: good verifiability approaches can be showcased, and insufficient ones can be recognized more easily.
Hinsen, Konrad
tag:beta.briefideas.org,2005:Idea/349 (2016-11-21T06:06:01Z / 2016-11-21T11:38:15Z)
Playing Games with Ideas: when epistemology pays off
https://doi.org/10.5281/zenodo.167647
In standard game theory analysis, gameplay proceeds using rational strategies. While there is no explicit link between rationality and strategy selection, playing to equilibrium solutions often depends on choosing the optimal strategy. Yet what if strategy optimality is not obvious to the players? Often, people make seemingly irrational decisions based on beliefs or situational interpretations of the empirical world. These beliefs have high social import, but little advantage compared to empirical reality.
We use a decision kernel to model how beliefs (an epistemological network) influence a rational strategy suite (top box in attachment). The feedforward network output is a set of beliefs that influence gameplay as a set of rational behaviors (empirical strategy suite). This results in an epistemologically influenced social capital payoff function and an empirical payoff function (bottom box in attachment). The joint distribution (or area of intersection between the payoff functions) is the epistemological payoff. This epistemological payoff can be small (low conditional social payoff) or large (high conditional social payoff). When small, players play their strategies close to the optimal expectation. Larger epistemological payoffs result in plays that have relatively higher social utility, but also little immediate utility when compared to the empirical world.
Alicea, Bradly
tag:beta.briefideas.org,2005:Idea/347 (2016-10-30T11:31:09Z / 2016-11-22T06:00:29Z)
Applying Supervised Machine Learning to the Fight against Online Child Sexual Abuse
https://doi.org/10.5281/zenodo.167646
Reports from NGOs and anecdotal evidence suggest that child abuse materials (CAMs) share some characteristics extensively. They mostly take place in indoor settings, the victim's face or genitalia are visible, and there are few visual clues about the abuser(s).
For known CAMs, there are methods and projects to detect and remove them automatically, such as Microsoft's PhotoDNA and Interpol's Baseline project. However, detection of new CAMs still relies heavily on outdated and inefficient methods such as reporting by ordinary users.
By thoroughly coding specific attributes of every known CAM, such as indoor/outdoor setting, on bed/couch/table, and visibility of face/breast/genitalia, supervised machine learning can be used in the detection of new CAMs. As the number of coded known CAMs increases, the algorithm would become better at determining the features of known CAMs, owing to the similarities mentioned above. Since the databases of Interpol and NCMEC hold millions of known CAMs, the need for supervisor intervention would decrease. Naturally, the algorithm might also help detect new CAMs to some extent after the inclusion of all known CAMs. Besides faster victim identification, if successful this would also markedly decrease the rate at which new CAMs spread across the internet.
Açar, Kemal Veli
tag:beta.briefideas.org,2005:Idea/344 (2016-10-26T10:34:01Z / 2017-02-26T14:43:00Z)
Anthropogenic increase in the length of a day
https://doi.org/10.5281/zenodo.163757
The length of a day might have been increasing owing to a decrease in the speed of the earth's rotation driven by anthropogenic factors: 1) an anthropogenic increase in the moment of inertia of the earth; and 2) a slowdown of tidal currents due to impediment by artificial structures. The first factor would arise if humans move natural resources (e.g., metals, oil) from below to above the earth's surface, which increases the distances between these resources and the center of the earth, and hence the moment of inertia. The second factor would arise if humans weaken tidal currents by building structures in the sea (e.g. dykes) and by performing tidal power generation. These possibilities can be tested by examining how humans have been utilizing underground natural resources and the effects of manipulations of flow in the sea.
In addition, it may be useful to monitor the precise length of a day in the future.
KURIHARA, Takeo
tag:beta.briefideas.org,2005:Idea/343 (2016-10-26T06:35:18Z / 2016-10-29T06:00:25Z)
Searching for Dyson spheres with Gaia
https://doi.org/10.5281/zenodo.163758
Assuming that Dyson spheres obscure optical/near-infrared light as grey absorbers, a partial Dyson sphere with a high covering fraction (fcov > 0.5) could reveal itself as an anomalously subluminous star with a parallax-based distance smaller than the spectroscopically inferred one. A large catalog search for objects of this type can currently be carried out by combining parallax distances from [Gaia DR1](https://arxiv.org/abs/1609.04303) with spectroscopic distances from [RAVE DR5](https://arxiv.org/abs/1609.03210). From the best-fitting stellar parameters provided by RAVE, the [PARSEC](http://stev.oapd.inaf.it/cgi-bin/cmd) isochrones on which the RAVE fits are based also provide the predicted [WISE](http://wise2.ipac.caltech.edu/docs/release/allsky/) fluxes for these stars. A comparison to the observed WISE flux may then reveal any infrared excess associated with waste heat emission from the Dyson sphere at wavelengths < 22 micron. Most of the ~200,000 stars common to both data sets have distance errors too large to be useful for this analysis, but searches among subsets of objects with the highest-quality data and fits could be fruitful.
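Under the grey-absorber assumption, the observed flux is $F = (1 - f_{\rm cov})\,L/(4\pi d^2)$, so a star at true (parallax) distance $d_{\rm plx}$ appears spectroscopically farther, $d_{\rm spec} = d_{\rm plx}/\sqrt{1 - f_{\rm cov}}$, giving $f_{\rm cov} = 1 - (d_{\rm plx}/d_{\rm spec})^2$. A minimal sketch of this selection cut, with made-up catalog values (real ones would come from Gaia and RAVE):

```python
# Flag stars whose parallax and spectroscopic distances imply fcov > 0.5,
# i.e. d_plx / d_spec < 1/sqrt(2), under the grey-absorber assumption.
import numpy as np

def covering_fraction(d_plx, d_spec):
    """Implied Dyson-sphere covering fraction: fcov = 1 - (d_plx/d_spec)**2."""
    return 1.0 - (np.asarray(d_plx) / np.asarray(d_spec)) ** 2

# Hypothetical mini-catalog of distances in parsecs.
d_plx = np.array([100.0, 210.0, 55.0])
d_spec = np.array([102.0, 460.0, 56.0])

fcov = covering_fraction(d_plx, d_spec)
candidates = fcov > 0.5   # the fcov > 0.5 cut discussed in the text
```

Here only the second (made-up) star passes the cut; in practice the distance uncertainties propagate directly into fcov, which is why only the highest-quality subsets are useful.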
Further cuts can be made to exclude objects which are very young (potentially obscured by debris disks), have left the main sequence (potentially subject to circumstellar dust), or for which the stellar parameters favoured by the RAVE and [RAVE-on](https://arxiv.org/abs/1609.02914) analyses are inconsistent.
Zackrisson, Erik; Wehrhahn, Ansgar; Korn, Andreas
tag:beta.briefideas.org,2005:Idea/342 (2016-10-24T11:12:09Z / 2016-10-25T06:00:38Z)
The simple observation by Huygens that disproved Newton's rule for the extraordinary refraction of calcite
https://doi.org/10.5281/zenodo.162990
Huygens (1629–1695), in his <a href="http://www.gutenberg.org/files/14725/14725-h/14725-h.htm"><i>Treatise on Light</i></a> (1690), explained the double refraction of calcite on the correct hypothesis that the secondary waves for the “extraordinary” refraction are <em>spheroidal</em> rather than spherical. His theory explained, <i>inter alia</i>, his simple observation that when a calcite crystal is placed on a horizontal page of text, the ordinary image appears higher than the extraordinary image (Ch. V, <a href="http://www.gutenberg.org/files/14725/14725-h/14725-h.htm#Page_81">Art. 39</a>).
Newton, in Qu. 17 of the Latin edition of his <i>Opticks</i> (1706), or Qu. 25 (in <a href="http://www.newtonproject.sussex.ac.uk/view/texts/diplomatic/NATP00051">Bk. 3</a>) of later English editions, gave his own ‘rule’ for double refraction. Let a ray from an external point <i>L</i> strike one face of a calcite crystal at point <i>P</i>, and be refracted to point <i>O</i> on the opposite face by the <i>O</i>rdinary refraction, and to point <i>E</i> on the opposite face by the <i>E</i>xtraordinary refraction. According to Newton's rule, the displacement vector <i>OE</i> does not vary with the direction of the incident ray.
It is <a href="http://www.grputland.com/2016/10/observation-by-huygens-that-should-have-discredited-newtons-rule.html">easily shown</a> that if this rule were true, the two text images reported by Huygens would have been at the same height.
Newton's rule was disproven in 1788. That it had already been disproven by a single qualitative observation, before it was published, seems to have escaped notice.
Putland, Gavin