[FIELDDATA] Data too large, data for [parent/child id cache]

Enonic version: 6.12.3
OS: Ubuntu 16.04.3 LTS

We are getting this message in our logs:

WARN org.elasticsearch.indices.breaker - [local-node] [FIELDDATA] New used memory 1246693904 [1.1gb] from field [parent/child id cache] would be larger than configured breaker: 1245315072 [1.1gb], breaking
WARN org.elasticsearch.index.warmer - [local-node] [storage-cms-repo][0] failed to warm-up fielddata for [_parent]
org.elasticsearch.ElasticsearchException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [parent/child id cache] would be larger than limit of [1245315072/1.1gb]
at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: org.elasticsearch.common.util.concurrent.UncheckedExecutionException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [parent/child id cache] would be larger than limit of [1245315072/1.1gb]
at org.elasticsearch.common.cache.LocalCache$Segment.get(
at org.elasticsearch.common.cache.LocalCache.get(
at org.elasticsearch.common.cache.LocalCache$LocalManualCache.get(
at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(
at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(
… 4 common frames omitted
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [parent/child id cache] would be larger than limit of [1245315072/1.1gb]
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.circuitBreak(
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(
at org.elasticsearch.index.fielddata.RamAccountingTermsEnum.flush(
at org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadDirect(
at org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadDirect(
at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$
at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$
at org.elasticsearch.common.cache.LocalCache$LocalManualCache$1.load(
at org.elasticsearch.common.cache.LocalCache$LoadingValueReference.loadFuture(
at org.elasticsearch.common.cache.LocalCache$Segment.loadSync(
at org.elasticsearch.common.cache.LocalCache$Segment.lockedGetOrLoad(
at org.elasticsearch.common.cache.LocalCache$Segment.get(
… 8 common frames omitted

Should we try to reindex the database? Should we vacuum?

I have fixed this by increasing the heap space: export JAVA_OPTS="-Xmx4096m"
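For anyone hitting the same thing, here is the workaround in full. 4 GB is just the value used here; size it to your server's available RAM, and note that the exact restart step depends on how you installed XP:

```shell
# Raise the JVM max heap so the embedded Elasticsearch fielddata
# circuit breaker (which defaults to a fraction of the heap) has more room.
export JAVA_OPTS="-Xmx4096m"

# Then restart the Enonic XP service so the new limit takes effect.
echo "$JAVA_OPTS"
```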

As discussed on Slack, yes this exception is due to a lack of heap space.

But I think that in your case, you have an installation that was created a long time ago and has gone through many creations, updates, and deletions.
Vacuuming should solve the problem, but from what I saw, while it correctly cleans up index documents and blobs, it does not seem to have an impact on fielddata.
I will create an issue to fix this problem.

Until then, yes, increasing the heap space is the workaround. :+1:

Yes, we delete/create/update a lot of content (logs) in Enonic every day.

I got that error again. Has it been fixed in newer releases?

From what I can see, in Enonic XP 6.15.5 vacuuming will correctly evict the obsolete data from fielddata (I must have been too fast when checking in June, but I will check for Enonic XP 6.12.3).
So I recommend vacuuming your installation (the process will take time with this quantity of obsolete data).

Also, it would be better to store this kind of data in a separate repository (not in the content repository “cms-repo”).
Since 6.11 you can create your own repositories and create nodes using the JS libraries ‘repo’ and ‘node’.
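As an illustration, a minimal sketch using the standard '/lib/xp/repo' and '/lib/xp/node' libraries. The repository id 'com.example.logs' and the node fields are made up, and the calls must run in a context with sufficient permissions (e.g. inside an init task):

```javascript
var repoLib = require('/lib/xp/repo');
var nodeLib = require('/lib/xp/node');

// Create the dedicated repository once, if it does not already exist.
if (!repoLib.get('com.example.logs')) {
    repoLib.create({
        id: 'com.example.logs'
    });
}

// Connect to the new repository instead of writing into cms-repo.
var repo = nodeLib.connect({
    repoId: 'com.example.logs',
    branch: 'master'
});

// Store a log entry as a plain node.
repo.create({
    _name: 'log-entry-1',
    message: 'Something happened',
    timestamp: new Date().toISOString()
});
```

Keeping high-churn log data out of cms-repo means the content repository's indexes stay small even with heavy create/update/delete traffic.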


I have the same error.
Elasticsearch consumes a lot of RAM.

Is there a configuration option to limit the amount of RAM ES can use?

This is a 4-year-old thread, and it is about XP6! Do you use XP6?

Sorry, I didn’t notice that this thread is about XP6.
No, I use version 7.11.1.

I have tried vacuuming and it works very well.

It cleared the ES caches.
I am considering adding it to a cron job to run once a week or once a day.
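For reference, a crontab sketch for a weekly run. This assumes the Enonic CLI is installed at the path shown and that the su password is available in the XP_SU_PASS environment variable; adjust both to your environment:

```
# Run Enonic's vacuum every Sunday at 03:00.
0 3 * * 0 /usr/local/bin/enonic vacuum --auth "su:${XP_SU_PASS}" >> /var/log/xp-vacuum.log 2>&1
```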

Once a week should be plenty.
