Instagram is a photo-sharing app, whilst Google Plus, also known as Google+, is a social network based around "circles" of friends and colleagues.
The Facebook-owned company has promised to remove any hate speech post within 24 hours of it being reported.
On average, the companies removed 70% of material deemed to be offensive.
The European Union (EU) has announced that tech companies have become better at removing illegal hate speech from their social platforms since signing up to a voluntary Code of Conduct initiated by the EU.
Welcoming the companies' growing commitment - the removal rate has risen steadily from 28% in the first monitoring round, in 2016, to 59% in the second, in May 2017 - the commissioner for justice, consumers and gender equality, Vera Jourova, said their progress would continue to be closely monitored.
"It is time to balance the power and the responsibility of the platforms and social media giants", she said.
Aside from removing more content, those involved are also responding to notifications more quickly; 81% of notifications were reviewed within 24 hours, up from 51% in the previous monitoring round.
Jourova said the results unveiled today made it less likely that she would push for legislation on the removal of illegal hate speech.
"I do not hide that I am not in favour of hard regulation because the freedom of speech for me is nearly absolute," Jourova told reporters in December.
The most common ground for hatred identified by the Commission was ethnic origins, followed by anti-Muslim hatred and xenophobia, including hatred against migrants and refugees.
"These latest results and the success of the code of conduct are further evidence that the Commission's current self-regulatory approach is effective and the correct path forward," said Stephen Turner, Twitter's head of public policy.
Following pressure from several European governments, social media companies stepped up their efforts to tackle extremist content online, including through the use of artificial intelligence.
YouTube said it was training machine learning models to flag hateful content at scale.
He added: "I strongly encourage IT companies to improve transparency and feedback to users, in line with the guidance we published a year ago".
"We've learned valuable lessons from the process, but there is still more we can do".