Frequently Asked Questions

Answers to the questions we're asked most often.

Is data segregated by org?

Yes. Data is segregated by organisation: every tenant on our service lives in its own database schema. Data is never shared between tenants, and we don't use one tenant's data to train any AI models used by another tenant.
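For the technically curious, schema-per-tenant isolation generally looks something like the minimal sketch below. This is an illustrative assumption, not Four/Four's actual implementation: the connection string, schema name, and `connection_for_tenant` helper are hypothetical, and it assumes a PostgreSQL database accessed via the psycopg2 library.

```python
import psycopg2
from psycopg2 import sql

def connection_for_tenant(dsn: str, tenant_schema: str):
    """Open a connection pinned to one tenant's schema (hypothetical helper)."""
    conn = psycopg2.connect(dsn)
    with conn.cursor() as cur:
        # sql.Identifier quotes the schema name safely; every query issued
        # on this connection now resolves tables inside this tenant's schema
        # only, so one tenant's queries can never see another tenant's data.
        cur.execute(
            sql.SQL("SET search_path TO {}").format(sql.Identifier(tenant_schema))
        )
    conn.commit()
    return conn

# Usage: all queries on this connection see only the (hypothetical)
# tenant_acme_corp schema.
conn = connection_for_tenant("dbname=app", "tenant_acme_corp")
```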

Can anyone outside the org access it?

No. Only Four/Four support personnel can access your tenant, and our process requires that they do so only when directed by you, to resolve an issue or provide guidance.

Can a user opt-in/opt-out of the notetaker?

Every individual user in your company can have their own setting for when the notetaker joins their meetings, ranging from "always join every external meeting in my calendar" to "only join meetings I explicitly invite the notetaker to", with a couple of variations in between to balance control against administrative burden.

If we accidentally let the notetaker in, can I remove it from the call on Zoom or Google Meet?

Yes. Aside from the native controls for removing a participant from meetings in Zoom, Google Meet, and Microsoft Teams, the Four/Four hub allows any team member to remove the notetaker remotely.

How do I remove a recording or insights from the platform?

You can delete any insight or any conversation from the platform. You can also edit an insight instead of deleting it, e.g. to remove a specific reference to a project that should be protected.

When the notetaker joins a call, will it let the participants know the call is being recorded?

Yes. The notetaker shows a clear on-screen notice stating that the call is being recorded, with directions to our website to find out more.

Where do you store your data?

All data at rest is stored in the UK (a Microsoft Azure region).

How do you ensure accuracy and reduce bias in your AI's analysis?

We work hard to ensure our machine learning models are accurate and fair. We gather a wide range of data to train the models and use specific techniques to spot and reduce bias, including fine-tuning our models on carefully selected datasets to balance diverse perspectives and employing adversarial training to teach our models to identify and counteract biased inputs. Our human reviewers check the models' outputs periodically and provide feedback for improvement, and regular reviews ensure our models treat different groups fairly. We also follow ethical guidelines and applicable laws to make sure we use AI responsibly.
