The Deep Analysis feature is meant to help users better understand the predictions that our models make. When scanning a text, we provide an overall document probability of AI and highlight which sentences are likely to be AI-generated. This answers the "what", but not the "how", behind our predictions. Deep Analysis goes further by quantifying the impact of each sentence on the overall document probability of AI. It does so by answering the question "How would the model's prediction change if a given sentence were modified or removed?" This is implemented using a popular machine learning interpretability method (see the detailed explanation below for technical details).
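The idea of measuring how a prediction changes when a sentence is removed can be sketched with a simple leave-one-out loop. This is only an illustration, not our production method: the `predict_ai_probability` function and the toy model below are hypothetical stand-ins, and the actual implementation may use a different interpretability estimator.

```python
def sentence_impacts(sentences, predict_ai_probability):
    """Score each sentence by how much removing it changes the document prediction."""
    baseline = predict_ai_probability(" ".join(sentences))
    impacts = []
    for i in range(len(sentences)):
        # Re-score the document with sentence i removed.
        reduced = " ".join(sentences[:i] + sentences[i + 1:])
        # Positive impact: removing the sentence lowers the AI probability,
        # i.e., the sentence was pushing the prediction toward "AI-generated".
        impacts.append(baseline - predict_ai_probability(reduced))
    return impacts

# Toy stand-in model (purely for demonstration): the "AI probability" is
# just the fraction of words equal to "the".
def toy_model(text):
    words = text.split()
    return words.count("the") / max(len(words), 1)

scores = sentence_impacts(["the the the cat", "a dog ran"], toy_model)
# The first sentence raises the toy model's score, the second lowers it.
```

Under this sign convention, positive scores correspond to sentences that push the prediction toward AI-generated, and negative scores to sentences that push it toward human-written.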
To use the Deep Analysis feature, enable the Deep Analysis switch prior to clicking the "Scan text" button as shown below.
Once scanning is finished, the following view will appear:
Sentences are highlighted orange if they increase the model's confidence that the document was AI-generated, and green if they increase its confidence that the document was written by a human.
Hovering over the bar will bring up a tooltip with the sentence corresponding to the region being hovered over. If that sentence is in view, it will be highlighted blue; otherwise, clicking the region will bring the relevant sentence into view.
Hovering over a sentence will bring up a tooltip with that sentence's exact score.
Lastly, the most important sentences are listed in sorted order at the bottom of the view, along with their scores; this at-a-glance summary may be easier to read for longer documents.
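Ranking sentences for such a summary amounts to sorting by the magnitude of each score, since both strongly AI-leaning and strongly human-leaning sentences are informative. The helper below is a hypothetical sketch of that ranking step, not the product's actual code.

```python
def top_sentences(sentences, impacts, k=3):
    """Return the k sentences with the largest absolute impact scores."""
    ranked = sorted(zip(sentences, impacts), key=lambda pair: abs(pair[1]), reverse=True)
    return ranked[:k]

# Example: the sentence with score -0.5 outranks one with score 0.2,
# because its absolute influence on the prediction is larger.
top = top_sentences(["s1", "s2", "s3"], [0.1, -0.5, 0.2], k=2)
```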
Please submit a request to obtain a detailed explanation of how these scores are computed.