I tried `fulltext('_allText', '"utprøvence"~2', 'OR')` in Data Toolbox and got the same results as for `fulltext('_allText', 'utprøvence~2', 'OR')`, even though there was only one item called "utprøvence" in the data.
So, I currently can’t reproduce the behavior you describe.
Due to tokenization, which uses whitespace as a delimiter, it won't work.
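That would also explain why the quoted and unquoted queries above behaved identically: for a single word, whitespace tokenization produces the same one-token stream either way. A rough sketch in plain Python (not the actual Elasticsearch analyzer, which does more work):

```python
def tokenize(text):
    # Simplified whitespace tokenizer: lowercase, then split on spaces.
    # The splitting step behaves like this for a single word.
    return text.lower().split()

# The bare term is a single token, so a one-word "phrase"
# matches exactly the same token stream:
print(tokenize('utprøvence'))   # ['utprøvence']

# A compound written with a space becomes two separate tokens,
# so no single indexed token corresponds to the joined compound:
print(tokenize('ut prøvence'))  # ['ut', 'prøvence']
```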
I suspect you are trying to handle compound words in Norwegian. Enonic XP uses Elasticsearch, which does not provide this functionality out of the box.
You are also dropping characters, not just misspelling. The Levenshtein algorithm is handled by Elasticsearch, so you could always check their documentation on fuzzy matching.
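For reference, a fuzziness of ~2 means at most two single-character edits (insertions, deletions, or substitutions) under Levenshtein distance, so a term that needs three or more edits will not match. A minimal sketch of plain Levenshtein distance (Elasticsearch's fuzzy matching is based on the same idea, though it can also count a transposition as one edit):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: the minimum number
    # of insertions, deletions, and substitutions turning a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # deletion
                cur[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3 -> outside a ~2 fuzzy match
print(levenshtein("utprøvence", "utprøvene"))  # 1 -> within ~2
```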