As the technology advances, machine learning is being used in a growing variety of software and services. Twitter has announced the Responsible Machine Learning Initiative, an effort to build machine learning systems that are responsible, responsive, and community-driven.
The initiative rests on four pillars: taking responsibility for algorithmic decisions; ensuring equity and fairness of outcomes; providing transparency about decisions and how they are reached; and enabling agency and algorithmic choice.
Twitter also says that responsible use of the technology requires studying its possible impacts over time. Machine learning in the Twitter service can affect hundreds of millions of tweets per day, and in some cases a system may behave differently than its designers intended. Because these subtle shifts can affect Twitter users, the company plans to study the impact of such changes and prepare thoroughly to build better products.
However, technical solutions alone cannot always address the harms lurking in algorithmic decisions. The working group devoted to responsible machine learning will therefore draw members from teams across Twitter, including technology, research, and trust and safety.
Twitter said the effort is led by its Machine Learning Ethics, Transparency, and Accountability (META) team, which also helps set priorities.
According to Twitter, as part of its responsible approach to machine learning it is conducting detailed analyses to understand the impact of machine learning decisions and to assess whether the algorithms it uses carry potential risks; these analyses will be made accessible in the coming months. The analyses Twitter plans to share publicly are as follows:
- A gender and racial bias analysis of its image-cropping algorithm.
- A fairness assessment of home timeline recommendations across racial subgroups.
- An analysis of content recommendations for different political ideologies across seven countries.
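To make the idea of a fairness assessment concrete, here is a minimal illustrative sketch of one common metric, the demographic parity gap (the largest difference in positive-prediction rates between groups). This is not Twitter's actual methodology; the data, group labels, and metric choice are hypothetical.

```python
# Illustrative only: a minimal demographic-parity check of the kind a
# fairness assessment might include. Data and groups are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups.

    predictions: parallel list of 0/1 model outputs
    groups: parallel list of group labels (e.g. demographic subgroup)
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group "a" gets a positive outcome 3/4 of the time,
# group "b" only 1/4 of the time, so the gap is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap near 0 suggests groups receive positive outcomes at similar rates; a large gap flags a disparity worth investigating, which is the kind of signal analyses like those above aim to surface.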
The META team investigates how Twitter's systems work and uses the findings to improve the experience on Twitter. This may mean changing a product, for example removing an algorithm and giving users more control over the images they tweet, or adding new standards to how policies are designed and built when a system has affected a particular community. Such changes do not always produce visible product differences, but they can raise awareness and prompt serious discussion about how machine learning should be built and applied.
Twitter adds that by sharing its work on responsible machine learning and asking for feedback, it can improve both the industry's understanding of machine learning and its own approach, and it explains that it is sharing its learnings and best practices to fulfill its responsibilities. Related information can be found here.