In this start-up highlight piece, we discuss how a CMU professor and his former grad student are ushering in a new era of responsible AI and helping companies address bias in their AI models. This is a short account of the genesis of Truera.
Synthesized has released the Community Edition of its data platform for Bias Mitigation. Released as a freemium version, the offering incorporates AI research and cutting-edge techniques to enable any organization to quickly identify potential biases within their data and immediately start to remediate these flaws.
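As a rough sketch of what "identify potential biases within their data" can look like in practice (this is not Synthesized's API; the column names and data below are hypothetical), one common first check compares positive-outcome rates across groups of a sensitive attribute:

```python
import pandas as pd

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate for each value of a sensitive attribute."""
    return df.groupby(group_col)[outcome_col].mean()

# Hypothetical loan data: 'gender' is the sensitive attribute, 'approved' the binary outcome.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,    1,   0,   1,   1,   1,   0,   1],
})

rates = outcome_rate_by_group(df, "gender", "approved")
print(rates)
# A large gap between groups (the demographic parity difference) flags data
# worth investigating before it is used to train a model.
print("demographic parity gap:", rates.max() - rates.min())
```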
Trustworthy, Fair, Transparent, and Responsible AI is the number-one priority for business leaders, and for very good reason. Noting the size of the fines regulators are handing out for misuse of AI, the risk of going live with an AI system that nobody understands, the bias that leads to irreparable brand …
In this contributed article, Sean Beard, Vice President at Pariveda Solutions, discusses how automating trust presents a new set of challenges to an organization due to the subjective nature of trust. Businesses must develop a better understanding of bias in their data and how different business contexts are applied to …
Even though there was supposedly a person in the decision-making process and a…
Tags: arrests, bias, facial recognition, police
June 25, 2020, 10:20 a.m.
In crime shows, they often have this amazing tool that turns a low-resolution,…
Tags: bias, face, pixels
June 25, 2020, 10:20 a.m.
Some close friends have asked whether I have been analyzing the COVID-19 datasets. Yes, I have been looking at these datasets. However, my analysis has been purely out of curiosity, not with the intent of publishing forecasts or recommendations.
March 29, 2020, 8:25 p.m.
Experienced machine learning professionals understand the limitations of modeling. Given the impact models can have on lives, society, and the economy, one has to understand one's social responsibility when communicating insights.
March 29, 2020, 8:25 p.m.
Hannah Davis works with machine learning, which relies on an input dataset to…
Tags: bias, Hannah Davis
March 9, 2020, 11:55 a.m.
There is No Noise — Only Bias: understanding why noise in machine learning is nothing but bias. In this post, we explain how bias and noise in machine learning are two sides of the same coin. "God does not play dice," Albert Einstein famously said in reaction to the emerging …
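For reference, and noting that this is the standard textbook decomposition rather than anything taken from the post itself, the expected squared error of a learned predictor $\hat{f}$ at a point $x$ is conventionally split into bias, variance, and irreducible noise:

$$
\mathbb{E}\!\left[(y - \hat{f}(x))^2\right]
= \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\!\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^2\right]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible noise}}
$$

Judging from its title, the post reinterprets that last $\sigma^2$ term not as genuine randomness but as bias from factors the model never sees.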
A researcher from Queen’s University Belfast has developed an innovative new algorithm that will help make artificial intelligence (AI) fairer and less biased when processing data. Companies often use AI technologies to sift through huge amounts of data in situations such as an oversubscribed job vacancy or in policing when …
We are all familiar with experiments; we read about them in books and newspapers. Researchers and scientists perform experiments to validate their hypotheses or to test a new product. Unlike observational studies, experiments are performed in a controlled environment so that the effect of external factors and variables can be eliminated …
Jan. 15, 2020, 10:15 p.m.
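As a toy illustration of why that controlled setting matters (the scenario, numbers, and variable names below are made up for this sketch, not taken from the post), randomly assigning units to treatment and control spreads unobserved factors evenly across the two groups, so a simple difference in means recovers the treatment effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# An unobserved external factor (e.g., baseline motivation) that influences the outcome.
confounder = rng.normal(0.0, 1.0, size=n)

# Random assignment: treatment is independent of the confounder.
treated = rng.integers(0, 2, size=n).astype(bool)

# Outcome = true treatment effect (2.0) + confounder effect + noise.
outcome = 2.0 * treated + 1.5 * confounder + rng.normal(0.0, 1.0, size=n)

# Because assignment is random, the confounder is balanced across groups,
# so the naive difference in means lands close to the true effect of 2.0.
effect_estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated treatment effect: {effect_estimate:.2f}")
```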
Understanding the Bias-Variance Tradeoff at three different levels: simple, intermediate, and advanced. In this post, we explain the bias-variance tradeoff in machine learning at three different levels: simple, intermediate, and advanced. We will follow up with some illustrative examples and discuss some practical implications at the end. If you can't explain it …
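As a quick, self-contained sketch of the tradeoff the post describes (this is not code from the post; the data, polynomial degrees, and noise level are arbitrary choices for illustration), an underparameterized model shows high bias while an overparameterized one shows high variance, visible as a gap between training and test error:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

def make_data(n):
    """Noisy samples from one period of a sine curve on [0, 1]."""
    x = rng.uniform(0, 1, size=n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=n)
    return x, y

x_train, y_train = make_data(30)
x_test, y_test = make_data(200)

for degree in (1, 5, 12):
    model = Polynomial.fit(x_train, y_train, degree)  # least-squares polynomial fit
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    # Degree 1 underfits (high bias: both errors stay high); degree 12 tends to
    # overfit (high variance: training error keeps dropping while test error does not).
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```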
DataRobot, the leader in enterprise AI, released new research revealing that 42% of AI professionals across the U.S. and U.K. are "very" to "extremely" concerned about AI bias. The research, based on a survey of more than 350 U.S. and U.K. executives involved in AI and machine …
StackOverflow’s annual developer survey concluded earlier this year, and they have graciously published the (anonymized) 2019 results for analysis. They’re a rich view into the experience of software developers around the world — what’s their favorite editor? how many years of experience? tabs or spaces? and crucially, salary. Software engineers’ …
Last week, Paco Nathan referenced Julia Angwin’s recent Strata keynote that covered algorithmic bias. This Domino Data Science Field Note dives a bit deeper into some of the publicly available research regarding algorithmic accountability and forgiveness, specifically around a proprietary black box model used to predict the risk of recidivism, …
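One concrete metric that turns up in publicly available audits of recidivism risk models is the false positive rate broken out by group. A minimal sketch of that check follows (this is not the analysis referenced above; the column names and data are hypothetical):

```python
import pandas as pd

# Hypothetical scored data: a group label, a binary high-risk prediction,
# and whether the person actually reoffended.
scores = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,    1,   0,   1,   0,   1,   0,   0],
    "reoffended":          [0,    1,   0,   1,   0,   1,   0,   0],
})

# False positive rate per group: among people who did not reoffend,
# how often were they still flagged as high risk?
negatives = scores[scores["reoffended"] == 0]
fpr_by_group = negatives.groupby("group")["predicted_high_risk"].mean()
print(fpr_by_group)
# A large gap between groups is the kind of disparity these audits examine.
```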
Paco Nathan's column covers data science for accountability, how reinforcement learning challenges assumptions, and surprises within AI and economics. Introduction: Welcome back to our new monthly series! September has been the busiest part of "Conference Season," with excellent new material to review. Three themes have jumped out recently. …