ABOUT ML Reference Document

Continuously Updated

To share your ideas, inspiration, and additional feedback related to this evolving document, please reach out to Christine Custis, Head of ABOUT ML and Fairness, Transparency, and Accountability. Learn more about the ABOUT ML effort and the contributors to the project here.

Section 0: How to Use This Document

This ABOUT ML Reference Document is a reference and foundational resource. Future contributions to the ABOUT ML work will include a PLAYBOOK of specifications, guides, recommendations, templates, and other meaningful artifacts to support ML project work by individuals in any and all of the roles listed below. Use cases made up of different artifacts from the PLAYBOOK, along with specific implementation instructions, will be packaged as PILOTS for PAI Partners to try out in their organizations. Feedback from the usage of these cases will further mature the artifacts in the PLAYBOOK and will support the ABOUT ML team’s further, rigorous, scholarly investigation of relevant research questions in the ML documentation space.

Recommended Reading Plan

Based on the role a reader plays in their organization and/or the community of interest they belong to, there are several different approaches to reading and employing the information in this ABOUT ML Reference Document:

Roles and Recommendations
ML system developers/deployers: ML system developers/deployers are encouraged to do a deep dive exploration of Section 3: Preliminary Synthesized Documentation Suggestions and use it to highlight gaps in their current understanding of both data- and model-related documentation and planning needs. This group will benefit most from further participation in the ABOUT ML effort by getting involved with the community in the forthcoming online forum and by testing the efficacy and applicability of templates and specifications to be published in the PLAYBOOK and PILOTS, which will be developed based on use cases, as an opportunity to run ML documentation processes within an organization.
ML system procurers: ML system procurers might explore Section 2.2: Documentation to Operationalize AI Ethics Goals to gather ideas about which concepts to include as requirements for models and data in future requests for proposals relevant to AI systems. Additionally, they can use Section 2.3: Research Themes on Documentation for Transparency to shape conversations with business owners and requirements teams to further elicit detailed key performance indicators and measures of success for any procured ML systems.
Operators of ML system APIs and/or experienced end users of ML systems: Users of ML system APIs and/or experienced end users of ML systems might skim this document and review all of the coral-colored Quick Guides to gain a better understanding of how ML concepts are relevant to many of the tools they regularly use. A review of Section 2.1: Demand for Transparency and AI Ethics in ML Systems will provide insight into conditions where it is appropriate to use ML systems. This section also explains how transparency is a foundation for both internal accountability among the developers, deployers, and API users of an ML system and external accountability to customers, impacted non-users, civil society organizations, and policymakers.
Internal compliance teams: Internal compliance teams are encouraged to explore Section 4: Current Challenges of Implementing Documentation and use it to shape conversations with developer/deployment teams to find ways to measure compliance throughout the Machine Learning Lifecycle (MLLC).
External auditors: External auditors could skim Appendix A: Compiled List of Documentation Questions and familiarize themselves with high-level concepts as well as tactically operationalized tenets to look for in their determination of whether or not an ML system is well-documented.
Lay users of ML systems and/or members of low-income communities: Lay users of ML systems and/or members of low-income communities might skim the document and review all of the blue-colored How We Define cards in order to gain an overarching knowledge of the text’s contents. These users are encouraged to continue learning about ML practices by researching how they might impact their everyday lives. Additional insights can be gathered from the Glossary section of this Reference Document.

Quick Guides


More information about a topic. Oftentimes, this will be a high-level and less academic expression of a term or concept.

Throughout this ABOUT ML Reference Document, we will use coral callout boxes with text to further explain a concept. This is a scanning improvement tactic strongly recommended by our Diverse Voices panel and is meant to make the content more accessible and consumable for lay users of machine learning systems.

How We Define

Example Term

We’ll use this space to give basic definitions of terms and phrases and, in some instances, to call out existing work related to the ABOUT ML effort.

Throughout this ABOUT ML Reference Document, we will use blue callout boxes with text to showcase our accepted (near-consensus) definition of a term or phrase. This is meant to provide foundational background information to readers of the document and also provides a baseline of understanding for any artifacts that may be derived from this work. Additional terms can be found in the Glossary section. Future versions of this reference and/or artifacts in the forthcoming PLAYBOOK will include audio/video offerings to support the consumption of this information by verbal/visual learners.

Contact for Support

If you have any questions or would like to learn more about this effort, please reach out to us by:

Visiting our ABOUT ML page to make contributions to the work

Table of Contents

ABOUT ML Reference Document

Section 0: How to Use This Document

Recommended Reading Plan

Quick Guides

How We Define

Contact for Support

Section 1: Project Overview

1.1 Statement of Importance for ABOUT ML Project

1.1.0 Importance of Transparency: Why a Company Motivated by the Bottom Line Should Adopt ABOUT ML Recommendations

1.1.1 About This Document and Version Numbering

1.1.2 ABOUT ML Goals and Plan

1.1.3 ABOUT ML Project Process and Timeline Overview

1.1.4 Who Is This Project For?

Audiences for the ABOUT ML Resources

Stakeholders That Should Be Consulted While Putting Together ABOUT ML Resources

Audiences for ABOUT ML Documentation Artifacts

Whose Voices Are Currently Reflected in ABOUT ML?

Genesis Story

Section 2: Literature Review (Current Recommendations on Documentation for Transparency in the ML Lifecycle)

2.1 Demand for Transparency and AI Ethics in ML Systems

2.2 Documentation to Operationalize AI Ethics Goals

2.2.1 Documentation as a Process in the ML Lifecycle

2.2.2 Key Process Considerations for Documentation

2.3 Research Themes on Documentation for Transparency 

2.3.1 System Design and Set Up

2.3.2 System Development

2.3.3 System Deployment

Section 3: Preliminary Synthesized Documentation Suggestions

3.4.1 Suggested Documentation Sections for Datasets

Data Specification

Motivation

Data Curation

Collection

Processing

Composition

Types and Sources of Judgement Calls

Data Integration

Use

Distribution

Maintenance

3.4.2 Suggested Documentation Sections for Models

Model Specification

Model Training

Evaluation

Model Maintenance

Section 4: Current Challenges of Implementing Documentation

Section 5: Conclusion

Version 0

Version 1

Appendix A: Compiled List of Documentation Questions

FactSheets (Arnold et al. 2018)

Datasheets (Gebru et al. 2018)

Model Cards (Mitchell et al. 2018)

A “Nutrition Label” for Privacy (Kelley et al. 2009)

The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards (Holland et al. 2019)

Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science (Bender and Friedman 2018)

Appendix B: Diverse Voices Process and Artifacts

Panel Recruitment Email

Recruitment Confirmation Email

Appendix C: Glossary

Sources Cited

  1. Holstein, K., Vaughan, J.W., Daumé, H., Dudík, M., & Wallach, H.M. (2018). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? CHI.
  2. Young, M., Magassa, L. and Friedman, B. (2019) Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology 21(2), 89-103.
  3. World Wide Web Consortium (W3C) process outlined here: https://www.w3.org/2019/Process-20190301/
  4. Internet Engineering Task Force (IETF) process outlined here: https://www.ietf.org/standards/process/
  5. The Web Hypertext Application Technology Working Group (WHATWG) process outlined here: https://whatwg.org/faq#process
  6. Oever, N., Moriarty, K. The Tao of IETF: A novice's guide to the Internet Engineering Task Force. https://www.ietf.org/about/participate/tao/.
  7. Young, M., Magassa, L. and Friedman, B. (2019) Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology 21(2), 89-103.
  8. Friedman, B., Kahn, Peter H., and Borning, A. (2008) Value sensitive design and information systems. In Kenneth Einar Himma and Herman T. Tavani (Eds.) The Handbook of Information and Computer Ethics. (pp. 70-100) John Wiley & Sons, Inc. http://jgustilo.pbworks.com/f/the-handbook-of-information-and-computer-ethics.pdf#page=104; Davis, J. and Nathan, L. P. (2015). Value sensitive design: applications, adaptations, and critiques. Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains. (pp. 11-40) DOI: 10.1007/978-94-007-6970-0_3. https://www.researchgate.net/publication/283744306_Value_Sensitive_Design_Applications_Adaptations_and_Critiques; Borning, A. and Muller, M. (2012). Next steps for value sensitive design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). (pp 1125-1134) DOI: https://doi.org/10.1145/2207676.2208560 https://dl.acm.org/citation.cfm?id=2208560
  9. Pichai, S., (2018). AI at Google: our principles. The Keyword. https://www.blog.google/technology/ai/ai-principles/; IBM's Principles for Trust and Transparency. IBM Policy. https://www.ibm.com/blogs/policy/trust-principles/; Microsoft AI principles. Microsoft. https://www.microsoft.com/en-us/ai/our-approach-to-ai; Ethically Aligned Design – Version II. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
  10. Zeng, Y., Lu, E., and Huangfu, C. (2018) Linking artificial intelligence principles. CoRR https://arxiv.org/abs/1812.04814.
  11. Jessica Fjeld, Hannah Hilligoss, Nele Achten, Maia Levy Daniel, Sally Kagay, and Joshua Feldman, (2018). Principled artificial intelligence - a map of ethical and rights based approaches, Berkman Klein Center for Internet and Society, https://ai-hr.cyber.harvard.edu/primp-viz.html
  12. Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. https://arxiv.org/pdf/1906.11668.pdf
  13. Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. https://arxiv.org/pdf/1906.11668.pdf
  14. Ananny, M., and Kate Crawford (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20(3): 973-989.
  15. Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019, January). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA (pp. 27-28). http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_188.pdf; Mittelstadt, B. (2019). AI Ethics–Too Principled to Fail? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293
  16. Greene, D., Hoffmann, A. L., & Stark, L. (2019, January). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/handle/10125/59651
  17. Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In AAAI/ACM Conf. on AI Ethics and Society (Vol. 1). https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products/
  18. Algorithmic Impact Assessment (2019) Government of Canada https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html
  19. Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. arXiv preprint arXiv:1903.12262. https://arxiv.org/abs/1903.12262; Responsible AI Licenses v0.1. RAIL: Responsible AI Licenses. https://www.licenses.ai/ai-licenses
  20. See Citation 5
  21. Safe Face Pledge. https://www.safefacepledge.org/; Montreal Declaration for Responsible AI. Universite de Montreal. https://www.montrealdeclaration-responsibleai.com/; The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems. (2018). Amnesty International and Access Now. https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf; Dagstuhl Declaration on the application of machine learning and artificial intelligence for social good. https://www.dagstuhl.de/fileadmin/redaktion/Programm/Seminar/19082/Declaration/Declaration.pdf
  22. Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. https://arxiv.org/pdf/1807.00553.pdf
  23. Wagstaff, K. (2012). Machine learning that matters. https://arxiv.org/pdf/1206.4656.pdf; Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In Early engagement and new technologies: Opening up the laboratory (pp. 55-95). Springer, Dordrecht. https://vsdesign.org/publications/pdf/non-scan-vsd-and-information-systems.pdf
  24. Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. https://arxiv.org/pdf/1807.00553.pdf
  25. Safe Face Pledge. https://www.safefacepledge.org/
  26. Montreal Declaration for Responsible AI. Universite de Montreal. https://www.montrealdeclaration-responsibleai.com/
  27. Diverse Voices How-To Guide. Tech Policy Lab, University of Washington. https://techpolicylab.uw.edu/project/diverse-voices/
  28. Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604.
  29. Ethically Aligned Design – Version II. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
  30. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. https://arxiv.org/abs/1803.09010; Hazard Communication Standard: Safety Data Sheets. Occupational Safety and Health Administration, US Department of Labor. https://www.osha.gov/Publications/OSHA3514.html
  31. Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The dataset nutrition label: A framework to drive higher data quality standards. https://arxiv.org/abs/1805.03677; Kelley, P. G., Bresee, J., Cranor, L. F., & Reeder, R. W. (2009). A nutrition label for privacy. In Proceedings of the 5th Symposium on Usable Privacy and Security (p. 4). ACM. http://cups.cs.cmu.edu/soups/2009/proceedings/a4-kelley.pdf
  32. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220-229). ACM. https://arxiv.org/abs/1810.03993
  33. Hind, M., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K. N., Olteanu, A., & Varshney, K. R. (2018). Increasing Trust in AI Services through Supplier's Declarations of Conformity. https://arxiv.org/abs/1808.07261
  34. Veale M., Van Kleek M., & Binns R. (2018) ‘Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making’ in Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI 2018. https://arxiv.org/abs/1802.01029.
  35. Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. https://arxiv.org/abs/1903.12262
  36. Cooper, D. M. (2013, April). A Licensing Approach to Regulation of Open Robotics. In Paper presented at We Robot: Getting down to business conference, Stanford Law School.
  37. Responsible AI Practices. Google AI. https://ai.google/education/responsible-ai-practices
  38. Everyday Ethics for Artificial Intelligence. (2019). IBM. https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
  39. Federal Trade Commission. (2012). Best Practices for Common Uses of Facial Recognition Technologies (Staff Report). Federal Trade Commission, 30. https://www.ftc.gov/sites/default/files/documents/reports/facing-facts-best-practices-common-uses-facial-recognition-technologies/121022facialtechrpt.pdf
  40. Microsoft (2018). Responsible bots: 10 guidelines for developers of conversational AI. https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf
  41. Tramer, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., ... & Lin, H. (2017, April). FairTest: Discovering unwarranted associations in data-driven applications. In 2017 IEEE European Symposium on Security and Privacy (EuroS&P) (pp. 401-416). IEEE. https://github.com/columbia/fairtest, https://www.mhumbert.com/publications/eurosp17.pdf
  42. Kishore Durg (2018). Testing AI: Teach and Test to raise responsible AI. Accenture Engineering Blog. https://www.accenture.com/us-en/insights/technology/testing-AI
  43. Kush R. Varshney (2018). Introducing AI Fairness 360. IBM Research Blog. https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/
  44. Dave Gershgorn (2018). Facebook says it has a tool to detect bias in its artificial intelligence. Quartz. https://qz.com/1268520/facebook-says-it-has-a-tool-to-detect-bias-in-its-artificial-intelligence/
  45. James Wexler. (2018) The What-If Tool: Code-Free Probing of Machine Learning Models. Google AI Blog. https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html
  46. Miro Dudík, John Langford, Hanna Wallach, and Alekh Agarwal (2018). Machine Learning for fair decisions. Microsoft Research Blog. https://www.microsoft.com/en-us/research/blog/machine-learning-for-fair-decisions/
  47. Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that Remember: Model Inversion Attacks and Data Protection Law. Phil. Trans. R. Soc. A, 376, 20180083. https://doi.org/10/gfc63m
  48. Floridi, L. (2010, February). Information: A Very Short Introduction.
  49. Data Information Specialists Committee UK, 2007. http://www.disc-uk.org/qanda.html.
  50. Harwell, Drew. “Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Casts Doubt on Their Expanding Use.” The Washington Post, WP Company, 21 Dec. 2019, www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/
  51. Hildebrandt, M. (2019) ‘Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’, Theoretical Inquiries in Law, 20(1) 83–121.
  52. D'Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., ... & Sculley, D. (2020). Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395.
  53. Selinger, E. (2019). ‘Why You Can’t Really Consent to Facebook’s Facial Recognition’, OneZero. https://onezero.medium.com/why-you-cant-really-consent-to-facebook-s-facial-recognition-6bb94ea1dc8f
  54. Lum, K., & Isaac, W. (2016). To predict and serve?. Significance, 13(5), 14-19. https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
  55. LabelInsight (2016). “Drive Long-Term Trust & Loyalty Through Transparency”. https://www.labelinsight.com/Transparency-ROI-Study
  56. Crawford and Paglen, https://www.excavating.ai/
  57. Geva, Mor & Goldberg, Yoav & Berant, Jonathan. (2019). Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. https://arxiv.org/pdf/1908.07898.pdf
  58. Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604.
  59. Desmond U. Patton et al (2017).
  60. See Cynthia Dwork et al.,
  61. Katta Spiel, Oliver L. Haimson, and Danielle Lottridge. (2019). How to do better with gender on surveys: a guide for HCI researchers. Interactions. 26, 4 (June 2019), 62-65. DOI: https://doi.org/10.1145/3338283
  62. A. Doan, A. Y. Halevy, and Z. G. Ives. Principles of Data Integration. Morgan Kaufmann, 2012
  63. Momin M. Malik. (2019). Can algorithms themselves be biased? Medium. https://medium.com/berkman-klein-center/can-algorithms-themselves-be-biased-cffecbf2302c
  64. Fire, Michael, and Carlos Guestrin (2019). “Over-Optimization of Academic Publishing Metrics: Observing Goodhart’s Law in Action.” GigaScience 8 (giz053). https://doi.org/10.1093/gigascience/giz053.
  65. Vogelsang, A., & Borg, M. (2019, September). Requirements engineering for machine learning: Perspectives from data scientists. In 2019 IEEE 27th International Requirements Engineering Conference Workshops (REW) (pp. 245-251). IEEE
  66. Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064.
  67. Partnership on AI. Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, Requirement 5.
  68. Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064. https://arxiv.org/abs/1901.00064
  69. If it is not, there is likely a mistake in the code. Checking a predictive model's performance on the training set cannot distinguish irreducible error (which comes from intrinsic variance of the system) from error introduced by bias and variance in the estimation; this is universal, and has nothing to do with different settings or
  70. Selbst, Andrew D. and Boyd, Danah and Friedler, Sorelle and Venkatasubramanian, Suresh and Vertesi, Janet (2018). “Fairness and Abstraction in Sociotechnical Systems”, ACM Conference on Fairness, Accountability, and Transparency (FAT*). https://ssrn.com/abstract=3265913
  71. Tools that can be used to explore and audit predictive model fairness include FairML, Lime, IBM AI Fairness 360, SHAP, Google What-If Tool, and many others
  72. Wagstaff, K. (2012). Machine learning that matters. arXiv preprint arXiv:1206.4656. https://arxiv.org/abs/1206.4656