Can algorithms be moral? Can we curate training data sets that reduce biases in search and other technology fields and reflect more diversity and inclusion in their results?
The debate began a while ago, starting with Article 12 of the Universal Declaration of Human Rights, and it does not end with the recent GDPR regulation and its adoption in the EU. How can we, as technology-savvy individuals and businesses, start the discussion around diversity, inclusion, and the social and ethical impacts of AI and big data when we design, engineer, collect, and analyze customer data?
And why is it more important than ever to design user journeys that have customer privacy at their core yet still deliver personalized experiences?