The internet can be an equalizer that addresses many of the world's inequities, but it also has the potential to spread harmful content and ideas. Research at Emakia Tech has led to the development of systems that filter harassment and cyberbullying out of social media to protect recipients and create a safer online environment. The goal is to provide digital security for frequently harassed users such as women, minorities, and LGBT groups.
This 2Chat will explore the components involved in building machine learning operations (MLOps) for Emakia Tech's content filtering tool. Emakia's development experience, from notebooks to production, spans BigQuery and Vertex AI, including datasets, AutoML text classifier training, and prediction. Emakia will cover the steps and technology used, including:
- Building a real-time Twitter data pipeline by collecting tweets with the Twitter API toolkit for Google Cloud (a minimal collection sketch follows this list)
- Using Kaggle datasets, Vertex AI NLP, and AutoML to store, label, and validate text classification data (see the training sketch below)
- Streamlining the process of taking ML models to production and then maintaining and monitoring them (see the deployment sketch below)
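As a rough illustration of the collection step, the sketch below streams tweets matching harassment-related keywords and writes them to a BigQuery table. It uses the tweepy client and the google-cloud-bigquery library rather than the Twitter API toolkit itself; the filter rule, table ID, and field choices are placeholder assumptions, not Emakia's actual configuration.

```python
# Minimal sketch: stream matching tweets into BigQuery for later labeling.
# The bearer token, filter rule, and table ID below are placeholders.
import tweepy
from google.cloud import bigquery

BQ_TABLE = "my-project.tweets.raw_tweets"  # assumed schema: id, text, created_at
bq_client = bigquery.Client()

class TweetCollector(tweepy.StreamingClient):
    def on_tweet(self, tweet):
        row = {
            "id": str(tweet.id),
            "text": tweet.text,
            "created_at": tweet.created_at.isoformat() if tweet.created_at else None,
        }
        errors = bq_client.insert_rows_json(BQ_TABLE, [row])
        if errors:
            print("BigQuery insert failed:", errors)

collector = TweetCollector(bearer_token="YOUR_BEARER_TOKEN")
collector.add_rules(tweepy.StreamRule("harassment OR bullying lang:en"))
collector.filter(tweet_fields=["created_at"])
```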
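For the labeling and training step, here is a hedged sketch using the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, bucket path, and display names are assumptions; the labeled CSV is presumed to contain tweet text plus a harassment/non-harassment label, whether it comes from a Kaggle dataset or Emakia's own labeling.

```python
# Sketch: create a Vertex AI text dataset from labeled tweets and train an
# AutoML single-label text classifier. Names and paths are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

dataset = aiplatform.TextDataset.create(
    display_name="harassment-tweets",
    gcs_source=["gs://my-bucket/labeled_tweets.csv"],  # text,label rows
    import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,
)

job = aiplatform.AutoMLTextTrainingJob(
    display_name="harassment-classifier",
    prediction_type="classification",
    multi_label=False,
)

model = job.run(
    dataset=dataset,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
    model_display_name="harassment-classifier-v1",
)
```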
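For the production step, the sketch below deploys the trained AutoML model to a Vertex AI endpoint and requests an online prediction for a single tweet. The model resource name, instance format, and label handling are assumptions based on the AutoML text classification predict schema; ongoing monitoring is only indicated in comments.

```python
# Sketch: deploy the AutoML text model and classify incoming tweets online.
# The model resource name below is a placeholder; in practice it would be
# the `model` returned by the training job above.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")
endpoint = model.deploy(deployed_model_display_name="harassment-classifier-v1")

response = endpoint.predict(
    instances=[{"content": "example tweet text", "mimeType": "text/plain"}]
)

# Each prediction pairs displayNames with confidences; a simple threshold
# decides whether a tweet is filtered. Predictions could also be written
# back to BigQuery to monitor volume, confidence, and label drift over time.
for prediction in response.predictions:
    labels = dict(zip(prediction["displayNames"], prediction["confidences"]))
    print(labels)
```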
Visit our previous content about AI and Machine Learning and join the C2C AI and Machine Learning Community to continue the conversation.