Search results: 10 found, listing 1–10 of 10.
This book demonstrates how non-linear/non-Gaussian Bayesian time-series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It details how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes the particle-filter-based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft's previous flights. Finally, it shows how the Réunion Island flaperon debris find affects the search probability distribution.
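The particle-filter approach the blurb mentions can be sketched, in heavily simplified form, as a bootstrap filter on a one-dimensional random-walk state. The dynamics, noise levels and observations below are invented for illustration and bear no relation to the book's actual MH370 flight-dynamics or satellite-measurement models:

```python
# Minimal bootstrap particle filter on a 1-D random-walk state.
# All model parameters here are illustrative placeholders.
import math
import random

def particle_filter(observations, n_particles=1000,
                    process_sd=1.0, obs_sd=0.5, seed=0):
    """Return the posterior mean of the state after each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for z in observations:
        # Propagate: random-walk dynamics (stand-in for flight dynamics).
        particles = [x + rng.gauss(0.0, process_sd) for x in particles]
        # Weight by the measurement likelihood (stand-in for satellite data).
        weights = [math.exp(-0.5 * ((z - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample proportionally to weight (the "bootstrap" step).
        particles = rng.choices(particles, weights=weights, k=n_particles)
        means.append(sum(particles) / n_particles)
    return means

print(particle_filter([0.0, 0.5, 1.0], n_particles=500))
```

The posterior mean tracks the observations while the resampling step concentrates particles in high-likelihood regions, which is how a flight-path distribution can be narrowed down to a search zone.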
This book presents a multidisciplinary perspective on chance, with contributions from distinguished researchers in biology, cognitive neuroscience, economics, genetics, general history, law, linguistics, logic, mathematical physics, statistics, theology and philosophy. The individual chapters are bound together by a general introduction, followed by an opening chapter that surveys 2,500 years of linguistic, philosophical, and scientific reflections on chance, coincidence, fortune, randomness, luck and related concepts. A main conclusion is that, even after all this time, we still cannot be sure whether chance is a truly fundamental and irreducible phenomenon, in that certain events are simply uncaused and could have been otherwise, or whether it is always merely a reflection of our ignorance. Other challenges that emerge from this book include a better understanding of the contextuality and perspectival character of chance (including its scale-dependence), and the curious fact that, throughout history (including contemporary science), chance has been used both as an explanation and as a hallmark of the absence of explanation. As such, this book challenges the reader to think about chance in a new way and to come to grips with this endlessly fascinating phenomenon.
Human Genetics · Philosophy of Science · Probability Theory
From ABO typing during the first half of the 20th century, to the use of enzymes and proteins contained in blood serum, and finally direct DNA typing, biology has served forensic purposes for many decades. Statistics, in turn, has constantly underpinned discussions of the probative value of the results of biological analyses, in particular when defendants could not be excluded as potential sources on the basis of differing genetic traits. The marriage between genetics and statistics has never been an easy one, though, as illustrated by the fierce arguments that peaked in the so-called "DNA wars" in some American courtrooms in the mid-1990s. This controversy has spurred a lively production of research and publications on various interpretative topics, such as the collection of relevant data, foundations in population genetics, and theoretical and practical considerations in probability and statistics. Both DNA profiling as a technique and the associated statistical considerations are now widely accepted as robust, but this does not yet guarantee or imply a neat transition to their application in court. Indeed, statistical principles applied to the results of forensic DNA profiling analyses are a necessary, yet not sufficient, preliminary requirement for the contextually meaningful use of DNA in the law. Ultimately, the appropriate use of DNA in the forensic context relies on inference, i.e. reasoning reasonably in the face of uncertainty. This is all the more challenging in that such thought processes need to be adopted by stakeholders from various backgrounds holding diverse interests. Although several topics of the DNA controversy have been settled over time, some are still debated (such as how to deal with the probability of error), while other, purportedly settled topics have seen recent revivals (e.g., how to deal with database searches).
In addition, challenging new topics have emerged over the last decade, such as the analysis and interpretation of traces containing only low quantities of DNA, where artefacts of varying nature may affect results. Both technical and interpretative research involving statistics thus represent areas where ongoing research is necessary, and where scholars from the natural sciences and the law should collaborate. The articles in this Research Topic therefore aim to investigate, from an interdisciplinary perspective, the current understanding of the strengths and limitations of DNA profiling results in legal applications. This Research Topic accepts contributions in all Frontiers article-type categories and places an emphasis on topics with a multidisciplinary perspective that explore (while not being limited to) statistical genetics for forensic scientists, case studies and reports, evaluation and interpretation of forensic findings, communication of expert findings to laypersons, and quantitative legal reasoning and fact-finding using probability.
Forensic DNA profiling · interpretation · Statistics and the law · probability theory · Commercialization · DNA transfer · Low-template DNA analysis · forensic molecular biology · Bacterial DNA
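The "probative value" calculations this record alludes to are commonly expressed as likelihood ratios. A hedged sketch of the textbook single-source calculation under Hardy-Weinberg and locus-independence assumptions follows; the allele frequencies are invented for illustration, and real casework uses far more elaborate models (mixtures, error rates, subpopulation corrections):

```python
# Textbook likelihood-ratio sketch for a single-source DNA profile.
# Allele frequencies below are made-up illustration values.

def genotype_frequency(p, q=None):
    """Random-match probability of a genotype: p^2 for a homozygote,
    2pq for a heterozygote (Hardy-Weinberg equilibrium)."""
    return p * p if q is None else 2 * p * q

def likelihood_ratio(locus_freqs):
    """LR for 'the suspect is the source' vs 'an unrelated person is',
    multiplying across loci assumed independent (the product rule)."""
    lr = 1.0
    for freqs in locus_freqs:
        lr /= genotype_frequency(*freqs)
    return lr

# Three loci: one homozygote (p = 0.1) and two heterozygotes.
profile = [(0.1,), (0.2, 0.05), (0.15, 0.1)]
print(likelihood_ratio(profile))  # = 1 / (0.01 * 0.02 * 0.03)
```

The larger the LR, the more the profile evidence supports the prosecution proposition over the defence proposition; it is then for the court, not the statistician, to combine it with the other evidence.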
This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in:

• Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong-way risk.
• Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling.
• Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations.

The recent financial crisis has cast tremendous doubt on the classical view of derivative pricing. Counterparty credit risk and liquidity issues are now integral aspects of a prudent valuation procedure, and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring Damiano Brigo, Christian Fries, John Hull, and Daniel Sommer) on the foundations of modeling and pricing in the presence of counterparty credit risk provides intriguing insights on the debate.
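As a rough illustration of the first bullet, a unilateral credit valuation adjustment is often approximated on a time grid as CVA ≈ (1 − R) Σᵢ DF(tᵢ) · EE(tᵢ) · ΔPDᵢ. The sketch below uses a flat hazard rate and an invented exposure profile; it is not the book's methodology, merely the standard textbook discretization:

```python
# Toy unilateral CVA: (1 - recovery) * sum of discounted expected
# exposure times the incremental default probability per period.
# Exposure profile, hazard rate and discount rate are invented inputs.
import math

def cva(expected_exposure, times, hazard_rate, discount_rate, recovery):
    total = 0.0
    prev_t = 0.0
    for ee, t in zip(expected_exposure, times):
        # Default probability over (prev_t, t] under a flat hazard rate.
        dpd = math.exp(-hazard_rate * prev_t) - math.exp(-hazard_rate * t)
        df = math.exp(-discount_rate * t)  # risk-free discount factor
        total += df * ee * dpd
        prev_t = t
    return (1.0 - recovery) * total

# Five yearly grid points with a humped exposure profile (currency units).
print(cva([40, 70, 90, 60, 30], [1, 2, 3, 4, 5],
          hazard_rate=0.02, discount_rate=0.03, recovery=0.4))
```

Debit and funding valuation adjustments follow the same grid logic with the roles of the two parties, or the funding spread, swapped in.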
Ontologies and semantic metadata can, in theory, solve the problems of traditional full-text search engines. In practice, however, they are always imperfect. This work analyzed whether the negative effect of ontology imperfection outweighs the positive effect of exploiting ontology features for information retrieval. To answer this question, a complete ontology-based information retrieval system was implemented and thoroughly evaluated.
This open access book presents the key aspects of statistics in Wasserstein spaces, i.e. statistics in the space of probability measures endowed with the geometry of optimal transportation. Beyond reviewing state-of-the-art results, it provides an accessible introduction to the fundamentals of this current topic, as well as an overview that will serve as an invitation and catalyst for further research. Statistics in Wasserstein spaces is an emerging topic in mathematical statistics, situated at the interface between functional data analysis (where the data are functions, thus lying in an infinite-dimensional Hilbert space) and non-Euclidean statistics (where the data satisfy nonlinear constraints, thus lying on non-Euclidean manifolds). The Wasserstein space provides the natural mathematical formalism for describing data collections that are best modeled as random measures on Euclidean space (e.g. images and point processes). Such random measures carry the infinite-dimensional traits of functional data but are intrinsically nonlinear due to positivity and integrability restrictions. Indeed, their dominant statistical variation arises through random deformations of an underlying template, a theme pursued in depth in this monograph. The book gives a succinct introduction to the necessary mathematical background, focusing on the results useful for statistics within an otherwise vast mathematical literature; presents an up-to-date overview of the state of the art, including some original results, and discusses open problems; and is suitable for self-study or as a graduate-level course text. Open access.
Probability Theory and Stochastic Processes · Optimal Transportation · Monge-Kantorovich Problem · Barycenter · Multi-marginal Transport · Functional Data Analysis · Point Processes · Random Measures · Manifold Statistics · Open Access · Geometrical statistics · Wasserstein metric · Fréchet mean · Procrustes analysis · Phase variation · Gradient descent · Probability & statistics · Stochastics
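One concrete entry point to the Wasserstein geometry described above: in one dimension, the optimal transport plan between two empirical measures with equally many atoms is monotone, so the p-Wasserstein distance reduces to matching sorted samples. A minimal sketch (a standard fact, not code from the book):

```python
# p-Wasserstein distance between two 1-D empirical measures with the
# same number of atoms: sort both samples and match them in order.
def wasserstein_1d(xs, ys, p=2):
    assert len(xs) == len(ys), "equal-size empirical measures assumed"
    xs, ys = sorted(xs), sorted(ys)
    cost = sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)
    return cost ** (1.0 / p)

# Translating a sample by a constant c moves it exactly c in W_p,
# illustrating why W_p captures "deformations" rather than densities.
a = [0.0, 1.0, 2.0, 3.0]
b = [x + 0.5 for x in a]
print(wasserstein_1d(a, b))  # 0.5
```

This translation-sensitivity is exactly the "phase variation" viewpoint of the monograph: the dominating variation of random measures is a deformation of a template, which Wasserstein geometry measures directly.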
A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning. The mathematization of causality is a relatively recent development and has become increasingly important in data science and machine learning. This book offers a self-contained and concise introduction to causal models and how to learn them from data. After explaining the need for causal models and discussing some of the principles underlying causal inference, the book teaches readers how to use causal models: how to compute intervention distributions, how to infer causal models from observational and interventional data, and how causal ideas can be exploited for classical machine learning problems. All of these topics are discussed first in terms of two variables and then in the more general multivariate case. The bivariate case turns out to be particularly hard for causal learning because there are no conditional independences of the kind exploited by classical methods in the multivariate case. The authors consider analyzing statistical asymmetries between cause and effect to be highly instructive, and they report on their decade of intensive research into this problem. The book is accessible to readers with a background in machine learning or statistics, and can be used in graduate courses or as a reference for researchers. The text includes code snippets that can be copied and pasted, exercises, and an appendix with a summary of the most important technical concepts.
Causality · machine learning · statistical models · probability theory · statistics · assumptions · cause-effect models · interventions · counterfactuals · SCMs · identifiability · semi-supervised learning · covariate shift · multivariate causal models · Markov · faithfulness · causal minimality · do-calculus · falsifiability · potential outcomes · algorithmic independence · half-sibling regression · episodic reinforcement learning · domain adaptation · Simpson's paradox · conditional independence · computer science
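The "intervention distributions" mentioned in this record can be made concrete with the back-door adjustment formula P(y | do(x)) = Σ_z P(y | x, z) P(z). The binary structural causal model below is invented for illustration (confounder Z influencing both X and Y) and shows how the interventional and observational conditionals differ:

```python
# Back-door adjustment on a made-up binary SCM with confounder Z -> {X, Y}.
# All probability tables are illustrative, not from the book.
P_z = {0: 0.6, 1: 0.4}
P_x_given_z = {0: 0.2, 1: 0.8}               # P(X=1 | Z=z)
P_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,    # P(Y=1 | X=x, Z=z)
                (1, 0): 0.4, (1, 1): 0.9}

def p_y_do_x(x):
    """P(Y=1 | do(X=x)): adjust for Z with its marginal distribution."""
    return sum(P_y_given_xz[(x, z)] * P_z[z] for z in P_z)

def p_y_given_x(x):
    """Observational P(Y=1 | X=x): Z is reweighted by Bayes' rule,
    so confounding bias creeps in."""
    px_z = {z: (P_x_given_z[z] if x == 1 else 1 - P_x_given_z[z]) * P_z[z]
            for z in P_z}
    norm = sum(px_z.values())
    return sum(P_y_given_xz[(x, z)] * px_z[z] / norm for z in P_z)

print(p_y_do_x(1), p_y_given_x(1))  # the two differ: confounding bias
```

Here the observational conditional overstates the effect of X on Y because observing X = 1 makes the high-risk value of Z more likely; intervening on X breaks exactly that dependence.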
This open access book explores machine learning and its impact on how we make sense of the world. It does so by bringing together two 'revolutions' in a surprising analogy: the revolution of machine learning, which has placed computing on the path to artificial intelligence, and the revolution in thinking about the law that was spurred by Oliver Wendell Holmes Jr. in the last two decades of the 19th century. Holmes reconceived law as prophecy based on experience, prefiguring the buzzwords of the machine learning age: prediction based on datasets. On the Path to AI introduces readers to the key concepts of machine learning, discusses the potential applications and limitations of predictions generated by machines using data, and informs current debates among scholars, lawyers and policy makers on how it should be used and regulated wisely. Technologists will also find useful lessons from the last 120 years of legal grappling with accountability, explainability, and biased data.
Science and Technology Studies · Human Geography · IT Law, Media Law, Intellectual Property · Artificial Intelligence · AI · Machine learning · 'big data' · probability theory · history of ideas · legal interpretation · Transhumanism · Futurism · Oliver Wendell Holmes Jr. · legal philosophy · machine bias · algorithmic bias · Open Access · Sociology · Entertainment & media law
This proceedings book presents papers from the 39th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, MaxEnt 2019. The workshop took place at the Max Planck Institute for Plasma Physics in Garching near Munich, Germany, from 30 June to 5 July 2019, and invited contributions on all aspects of probabilistic inference, including novel techniques, applications, and work that sheds new light on the foundations of inference. Addressed are inverse problems, uncertainty quantification (UQ), and problems arising from a large variety of applications, such as earth science, astrophysics, material and plasma science, imaging in geophysics and medicine, non-destructive testing, density estimation, remote sensing, Gaussian process (GP) regression, optimal experimental design, data assimilation, and data mining.
hypothesis tests · precise hypotheses · pragmatic hypotheses · UAP · UAV · UFO · Nimitz · Tic-Tac · image reconstruction · Bayesian maximum a posteriori approach · entropy prior probability · global statistical regularization · local statistical regularization · PET · SPECT · marginal likelihood · evidence · nested sampling · annealed importance sampling · Monte Carlo · stochastic gradients · SGHMC · model comparison · MCMC · thermodynamic integration · HMC · uncertainty quantification · non-intrusive · spectral expansion · plasma-wall interactions · Bayesian analysis · Deep Learning (DL) · Artificial Intelligence (AI) · Convolutional Neural Network (CNN) · classification · orthodontics · cervical vertebra maturation · machine learning · multi-fidelity · Gaussian processes · probability theory · Bayes · impedance cardiography · aortic dissection · Gaussian process regression · physics-informed methods · kernel methods · field reconstruction · source localization · partial differential equations · meshless methods · cluster analysis · mean shift method · Bayesian evidence · formant · steady-state · vowel · detrending · acoustic phonetics · source-filter theory · model averaging
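The marginal likelihood (or "evidence") that recurs in these keywords, Z = ∫ L(θ) π(θ) dθ, is the quantity that nested sampling and annealed importance sampling are built to estimate. For a toy one-parameter model it can simply be computed by quadrature, which makes the definition concrete even though it does not scale; the coin-flip model below is an illustration, not an example from the proceedings:

```python
# Brute-force evidence for a beta-binomial toy model: the integral
# Z = ∫ θ^heads (1-θ)^tails dθ under a uniform prior on θ in [0, 1].
# Real problems need nested sampling / AIS; this is only for intuition.
import math

def evidence_beta_binomial(heads, tails, grid=10000):
    total = 0.0
    for i in range(grid):
        theta = (i + 0.5) / grid          # midpoint rule on [0, 1]
        total += theta ** heads * (1 - theta) ** tails
    return total / grid

# Known closed form: Z = heads! * tails! / (heads + tails + 1)!
z = evidence_beta_binomial(6, 4)
exact = math.factorial(6) * math.factorial(4) / math.factorial(11)
print(z, exact)
```

Ratios of such evidences between competing models are the Bayes factors used for the model comparison mentioned in the keyword list.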
Quantum information has dramatically changed information science and technology, regarding the quantum nature of the information carrier as a resource for building new information protocols, designing radically new communication and computation algorithms, and performing ultra-sensitive measurements in metrology, with a wealth of applications. From a fundamental perspective, this new discipline has led us to regard quantum theory itself as a special theory of information, and has opened routes for exploring solutions to the tension with general relativity, based, for example, on the holographic principle, on non-causal variations of the theory, or on the powerful algorithm of the quantum cellular automaton, which has revealed new routes for exploring quantum field theory, both as a new microscopic mechanism on the fundamental side and as a tool for efficient physical quantum simulations for practical purposes. In this golden age of foundations, an astonishing number of new ideas, frameworks, and results, spawned by the quantum information theory experience, have revolutionized the way we think about the subject, with a new research community emerging worldwide, including scientists from computer science and mathematics.
reconstruction of quantum theory · entanglement · monogamy · quantum nonlocality · conserved informational charges · limited information · complementarity · characterization of unitary group and state spaces · algebraic quantum theory · C*-algebra · Gelfand duality · classical context · Bohrification · process theory · classical limit · purity · higher-order interference · generalised probabilistic theories · Euclidean Jordan algebras · Pauli exclusion principle · quantum foundations · X-ray spectroscopy · underground experiment · silicon drift detector · measurement uncertainty relations · relative entropy · position · momentum · quantum mechanics · the measurement problem · collapse models · X-rays · quantum gravity · discrete spacetime · causal sets · path summation · entropic gravity · physical computing models · complexity classes · causality · blind source separation (BSS) · qubit pair · exchange coupling · entangled pure state · unentanglement criterion · probabilities in quantum measurements · independence of random quantum sources · iterant · Clifford algebra · matrix algebra · braid group · Fermion · Dirac equation · quantum information · quantum computation · semiclassical physics · quantum control · quantum genetic algorithm · sampling-based learning control (SLC) · relativity · cluster states · multipartite entanglement · percolation · Shannon information · quantum measurements · consistent histories · incompatible frameworks · single framework rule · probability theory · entropy · quantum relative entropy · inference · quantum measurement · quantum estimation · macroscopic quantum measurement · quantum annealing · adiabatic quantum computing · hard problems · Hadamard matrix · binary optimization · reconstruction of quantum mechanics · conjugate systems · Jordan algebras · quantum correlations · Gaussian states · Gaussian unitary operations · continuous-variable systems · Wigner-friend experiment · no-go theorem · interpretations of quantum mechanics · subsystem · agent · conservation of information · purification · group representations · commuting subalgebras · quantum walks · Hubbard model · Thirring model · quantum theory and gravity
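To ground the "entanglement" and "relative entropy" keywords in something computable: for a two-qubit pure state, tracing out one qubit and taking the von Neumann entropy of the remainder quantifies the entanglement. The sketch below is a generic textbook computation in pure Python (no relation to any particular contribution in this collection):

```python
# Entanglement entropy of a 2-qubit pure state, given as four
# amplitudes (a00, a01, a10, a11). Pure Python, no numpy.
import math

def reduced_density_qubit1(psi):
    """Partial trace over qubit 2, returning a 2x2 density matrix."""
    a00, a01, a10, a11 = psi
    return [[a00 * a00.conjugate() + a01 * a01.conjugate(),
             a00 * a10.conjugate() + a01 * a11.conjugate()],
            [a10 * a00.conjugate() + a11 * a01.conjugate(),
             a10 * a10.conjugate() + a11 * a11.conjugate()]]

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of the reduced state of qubit 1."""
    rho = reduced_density_qubit1(psi)
    # Closed-form eigenvalues of a 2x2 Hermitian matrix.
    tr = (rho[0][0] + rho[1][1]).real
    det = (rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]).real
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return -sum(e * math.log2(e) for e in eigs if e > 1e-12)

bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]   # (|00> + |11>)/sqrt(2)
product = [1, 0, 0, 0]                              # |00>, unentangled
print(entanglement_entropy(bell), entanglement_entropy(product))
```

The Bell pair yields the maximal one bit of entanglement while the product state yields none, which is the quantitative content behind monogamy-type statements: a qubit maximally entangled with one partner has no entanglement left for another.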