Hello,
We are experiencing unexplained OOM crashes. We have seen this a few
times now, across our different Solr instances. The crash happens at
only a single shard of the collection.
Environment details:
1. Solr 4.3, running on Tomcat.
2. 24 shards.
3. Indexing rate of ~800 docs per minute.
Solrconfig.xml (a sketch of these settings follows the list):
1. Merge factor 4
2. Soft commit every 10 min
3. Hard commit every 30 min
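For reference, a minimal sketch of what these settings might look like in
solrconfig.xml on Solr 4.x; the maxTime values follow the intervals above,
while the openSearcher=false line is an assumption on my part:

    <indexConfig>
      <mergeFactor>4</mergeFactor>
    </indexConfig>

    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxTime>1800000</maxTime>   <!-- hard commit every 30 min -->
        <openSearcher>false</openSearcher>
      </autoCommit>
      <autoSoftCommit>
        <maxTime>600000</maxTime>    <!-- soft commit every 10 min -->
      </autoSoftCommit>
    </updateHandler>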
Hello again,
After a heavy query on my index (returning 100K docs in a single query), my
JVM heap floods and I get a Java OOM exception, after which my GC cannot
collect anything ("GC overhead limit exceeded"), as these memory chunks are
not disposable.
I want to be able to afford queries like this; my concern is that this case
provokes a total Solr crash, returning a 503 Internal Server Error while
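For concreteness, a minimal SolrJ 4.x sketch of the kind of request I mean
(the endpoint and collection name are placeholders); asking for rows=100000
forces all 100K documents to be materialized in the heap for one response:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class HeavyQuery {
      public static void main(String[] args) throws SolrServerException {
        // hypothetical endpoint; adjust to your Tomcat deployment
        HttpSolrServer server =
            new HttpSolrServer("http://localhost:8080/solr/collection1");
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(100000);  // all 100K docs are buffered in the JVM heap at once
        QueryResponse rsp = server.query(q);
        System.out.println("found: " + rsp.getResults().getNumFound());
      }
    }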
Hi,
We sometimes see the generator run OOM. This happens when we either have
too high a topN value or too many segments to generate. In either case, a
very large number of records is generated with the same (lowest) score, and
they all end up in a single reducer. We limit the generator by domain (see
the sketch below), which may be a source of trouble.
I've not yet found a way around this problem, so I'm looking
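For context, assuming Nutch 1.x, the domain limiting we use amounts to
properties along these lines in nutch-site.xml (the cap value below is only
a placeholder):

    <property>
      <name>generate.count.mode</name>
      <value>domain</value>
    </property>
    <property>
      <name>generate.max.count</name>
      <value>1000</value>  <!-- hypothetical per-domain cap -->
    </property>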
Hi,
I get an out-of-memory error every time I try to include "-pointsDir" in the
list of parameters to ClusterDumper. Is there another way to read the points
belonging to the clusters without increasing the heap size? Any suggestions?
I have already tried increasing JAVA_HEAP_MAX and MAHOUT_HEAPSIZE in
bin/mahout, but it does not help.
Thanks and regards,
Sohini
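One alternative I am considering, assuming the points directory is the usual
kmeans clusteredPoints output (IntWritable cluster id -> WeightedVectorWritable),
is to stream the sequence files one record at a time with Hadoop's
SequenceFile.Reader rather than letting ClusterDumper load every point into
memory:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.mahout.clustering.WeightedVectorWritable;

    public class StreamPoints {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // e.g. output/clusteredPoints/part-m-00000 (path is a placeholder)
        Path points = new Path(args[0]);
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, points, conf);
        IntWritable clusterId = new IntWritable();
        WeightedVectorWritable point = new WeightedVectorWritable();
        // one record in memory at a time, so the heap stays flat
        while (reader.next(clusterId, point)) {
          System.out.println(clusterId.get() + "\t" + point.getVector());
        }
        reader.close();
      }
    }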
We are currently evaluating Cassandra 2.0 to be used with a project.

The cluster consists of 5 identical nodes; each has 16GB RAM, a 6-core Xeon,
and a 2TB hard disk.

The heap max size is set to 8GB and row_cache_size_in_mb=0 (see the sketch
below).

The last test was a write test; it runs several days (with nearly only write
requests) and inserts 850.0
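For reference, a sketch of where those two settings live, in cassandra-env.sh
and cassandra.yaml respectively:

    # cassandra-env.sh
    MAX_HEAP_SIZE="8G"

    # cassandra.yaml
    row_cache_size_in_mb: 0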
I am still struggling with the JVM. We just had a hard OOM crash of a region
server after only running for 36 hours. Any help would be greatly
appreciated. Do we need to restart nodes every 24 hours under load? GC
pauses are something we are trying to plan for, but outright OOM crashes are
a new problem.
The message below seems to be where it starts going bad. It is followed by
no fewer than 63 Concurrent
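For what it's worth, the GC-pause planning we are experimenting with amounts
to CMS flags along these lines in hbase-env.sh (the heap size here is a
placeholder, not our actual setting). Lowering the occupancy fraction makes
CMS start earlier, trading some throughput for fewer concurrent mode failures:

    export HBASE_REGIONSERVER_OPTS="-Xms8g -Xmx8g \
      -XX:+UseConcMarkSweepGC \
      -XX:CMSInitiatingOccupancyFraction=70 \
      -XX:+UseCMSInitiatingOccupancyOnly"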
All,
I have a Derby table with up to a million rows. Some large subset of those rows may be returned
by a SELECT query.
I am using iBATIS queryForList with the embedded Derby driver. I am using the
version which takes a maxRows parameter. I call that method and receive back
the expected "maxRows" result objects. For example, the select would match
100,000 rows, but I only get
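A minimal sketch of the call I mean, using the iBATIS 2.x queryForList
overload that takes skip/max arguments (the statement id and limit are
placeholders):

    import com.ibatis.sqlmap.client.SqlMapClient;
    import java.sql.SQLException;
    import java.util.List;

    public class BoundedSelect {
      // sqlMapClient is assumed to be configured with the embedded Derby driver
      static List selectCapped(SqlMapClient sqlMapClient) throws SQLException {
        // skip = 0, max = 500: at most 500 mapped rows are materialized
        return sqlMapClient.queryForList("getRows", null, 0, 500);
      }
    }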
There is a low-level memory "leak" (really an unfortunate retention)
in Lucene which can cause OOMs when using the Tika tools on large
files such as PDFs.
A patch will be in the trunk sometime soon.
http://markmail.org/thread/lhr7wodw4ctsekik
https://issues.apache.org/jira/browse/LUCENE-2387
--
Lance Norskog
[email protected]
Hi,
I'm trying to run groupBy(function) followed by saveAsTextFile on an RDD of
~100 million records. The data size is 20GB, and groupBy results in an RDD of
1061 keys with Iterable values. The job runs on 3 hosts in a standalone setup,
with each host's executor having 100G RAM and 24 cores dedicated to it. While
the groupBy stage completes successfully with ~24GB of shuffle write, the
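For reference, a minimal sketch of the job (Spark 1.x Java API; the master
URL, paths, and key function are placeholders). My understanding is that
groupBy shuffles every value for a key to a single executor, so one hot key
can exhaust that executor's heap:

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;

    public class GroupBySketch {
      public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext("spark://master:7077", "groupBy-test");
        JavaRDD<String> records = sc.textFile("hdfs:///input");  // ~100M records
        // all values for a key travel to one task during the shuffle
        JavaPairRDD<String, Iterable<String>> grouped =
            records.groupBy(new Function<String, String>() {
              public String call(String r) {
                return r.split(",")[0];  // placeholder key function
              }
            });
        grouped.saveAsTextFile("hdfs:///output");
        sc.stop();
      }
    }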