[FIELDDATA] Data too large, data for [parent/child id cache]

Enonic version: 6.12.3
OS: Ubuntu 16.04.3 LTS

We are getting this message in our logs:

WARN org.elasticsearch.indices.breaker - [local-node] [FIELDDATA] New used memory 1246693904 [1.1gb] from field [parent/child id cache] would be larger than configured breaker: 1245315072 [1.1gb], breaking
WARN org.elasticsearch.index.warmer - [local-node] [storage-cms-repo][0] failed to warm-up fielddata for [_parent]
org.elasticsearch.ElasticsearchException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [parent/child id cache] would be larger than limit of [1245315072/1.1gb]
at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:80)
at org.elasticsearch.search.SearchService$FieldDataWarmer$1.run(SearchService.java:888)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.elasticsearch.common.util.concurrent.UncheckedExecutionException: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [parent/child id cache] would be larger than limit of [1245315072/1.1gb]
at org.elasticsearch.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
at org.elasticsearch.common.cache.LocalCache.get(LocalCache.java:3937)
at org.elasticsearch.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:167)
at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:74)
… 4 common frames omitted
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [parent/child id cache] would be larger than limit of [1245315072/1.1gb]
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.circuitBreak(ChildMemoryCircuitBreaker.java:97)
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:148)
at org.elasticsearch.index.fielddata.RamAccountingTermsEnum.flush(RamAccountingTermsEnum.java:71)
at org.elasticsearch.index.fielddata.RamAccountingTermsEnum.next(RamAccountingTermsEnum.java:89)
at org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadDirect(ParentChildIndexFieldData.java:114)
at org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadDirect(ParentChildIndexFieldData.java:65)
at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$1.call(IndicesFieldDataCache.java:180)
at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$1.call(IndicesFieldDataCache.java:167)
at org.elasticsearch.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
at org.elasticsearch.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
at org.elasticsearch.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
at org.elasticsearch.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
at org.elasticsearch.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
… 8 common frames omitted

Should we try to reindex the database? Should we vacuum?

I fixed it by increasing the heap space: `export JAVA_OPTS="-Xmx4096m"`
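In case it helps anyone else, here is the full workaround as a shell sketch (the install path and start script name are assumptions; adjust them to your setup):

```sh
# Give the XP JVM (which runs the embedded Elasticsearch) more heap.
# 4 GB worked for us; tune the value to your data volume.
export JAVA_OPTS="-Xmx4096m"

# Restart XP from the same environment so the setting takes effect
# (path and script name depend on your installation).
/opt/enonic/xp/bin/server.sh
```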

As discussed on Slack: yes, this exception is due to a lack of heap space.

But I think that in your case, you have an installation that was created a long time ago and has performed many creations, updates and deletions.
Vacuum should solve the problem, but from what I saw, while it cleans up the index documents and the blobs well, it does not seem to have an impact on fielddata.
I will create an issue to fix this problem.

Until then, yes, increasing the heap space is the workaround. :+1:

Yes, we delete/create/update a lot of content (logs) in Enonic every day.

I got that error again. Has it been fixed in newer releases?

From what I can see, in Enonic XP 6.15.5 vacuuming correctly evicts the obsolete data from fielddata (I must have been too quick when checking in June, but I will verify for Enonic XP 6.12.3).
So I recommend vacuuming your installation (the process will take time with this quantity of obsolete data).

Also, it would be better to store this kind of data in a separate repository (not in the content repository “cms-repo”).
Since 6.11 you can create your own repositories and create nodes using the JS libraries ‘repo’ and ‘node’:
http://repo.enonic.com/public/com/enonic/xp/docs/6.15.5/docs-6.15.5-libdoc.zip!/module-repo.html
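As a rough illustration, something like this (a minimal sketch; the repository id and node fields are placeholders I made up, and error handling is omitted):

```js
var repoLib = require('/lib/xp/repo');
var nodeLib = require('/lib/xp/node');

// Create a dedicated repository for the log entries
// (only needed once; skip if it already exists).
repoLib.create({
    id: 'com.example.logs'
});

// Connect to the new repository and store the data there
// instead of in cms-repo.
var connection = nodeLib.connect({
    repoId: 'com.example.logs',
    branch: 'master'
});

connection.create({
    _name: 'log-entry-2018-06-01',
    message: 'Something happened'
});
```

That keeps the high-churn log data out of the content repository, so the fielddata for “cms-repo” stays small.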


I have the same error, unfortunately.
Elasticsearch consumes a lot of RAM.

Is there a configuration option to limit the amount of RAM ES can use?

This is a four-year-old thread, for XP 6! Are you using XP 6?

Sorry, I didn’t notice that this is an XP 6 thread.
No, I use version 7.11.1.

I have tried vacuum and it works very well.

It cleared the ES caches.
I’m considering adding it to a cron job that runs once a week or once a day.

Once a week should be plenty.
