Hi there,
I'm running into a problem where queries that are distributed among multiple shards don't
return binary field data properly.
If I hit a single core, the XML response to my HTTP request contains the expected data.
If I hit the request handler that's configured to distribute the request to my shards, the
XML contains "[B@..." instead.
It looks like I wind up getting the .toString() data, not the
Greetings,
I am trying to integrate Nutch 1.3 and Solr 3.4. I am using the bin/nutch crawl command with the solr
parameter, but before the process finishes completely, I get the following output in my terminal:
SolrIndexer: starting at 2011-11-10 15:58:39
java.io.IOException: Job failed!
SolrDeleteDuplicates: starting at 2011-11-10 15:58:44
SolrDeleteDuplicates: Solr url: http://localhost:8983/solr/
SolrDeleteDuplicates
Hi,
I am using Solr 1.4.1. When I search for an empty string in a string field,
q=tag_facet:"", it returns documents that have values in tag_facet.
When I use the same query, q=tag_facet:"", in Solr 3.4, it returns
only documents with the "" string in tag_facet.
Solr 3.4 works as expected. I just want to know whether this is an issue in
Solr 1.4.1. Please advise.
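For anyone comparing the two behaviours, the two different intents can be written out explicitly (3.x query syntax; the field name is taken from the message):

```
q=tag_facet:""            # documents whose tag_facet value is the empty string
q=-tag_facet:[* TO *]     # documents that have no value in tag_facet at all
```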
Hi,
In Solr 3.4, while doing a geospatial search, is there a way to retrieve
the distance of each document from the specified location?
I am aware of the workaround mentioned at
http://wiki.apache.org/solr/SpatialSearch/#Returning_the_distance; however,
it doesn't work for me, since search terms also have to be included as part
of the query.
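For what it's worth, a commonly suggested variant of that wiki workaround on Solr 3.x is to make the main query a pure function query, so the score becomes the distance, and to move the keyword match into a filter query. A sketch of the request parameters (the field name, point, and keyword here are assumptions, not from the message):

```
q={!func}geodist()
sfield=location
pt=45.15,-93.85
fq=text:patent
sort=score asc
fl=*,score
```

The trade-off is that the keyword match no longer contributes to ranking, since the score is repurposed as the distance.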
Thanks,
Anand
I use Drupal for accessing the Solr search engine. After updating and
recreating my index, everything works as before. Then I activate
group=true and group.field=site, and Solr delivers the wanted search
results, but in Drupal nothing appears, just an empty search page. I found
out that grouping changes the result-set names. No problem: Solr offers
the group.main=true parameter for this case.
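Spelled out as request parameters (the field name is from the message):

```
group=true
group.field=site
group.main=true   # flatten the grouped result back into the normal response format
```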
----- Forwarded Message -----
Hi folks,
It seems to me that with the multi-update operation in ZooKeeper 3.4.x, it
should be possible to do a form of distributed STM (software transactional memory)?
The idea I have is this...
- For every zNode that I'll set or delete, I perform a get first, so that I
have its last version number.
- This means that when I commit the transaction, I can add the version number
to the set and delete ops. The transaction would fail if
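The check-then-commit idea above can be sketched as follows. This is a plain-Python, in-memory simulation of the semantics of ZooKeeper 3.4's multi() (where version-checked setData/delete ops fail the whole batch on any mismatch), not actual ZooKeeper client code:

```python
class VersionedStore:
    """In-memory stand-in for a set of zNodes with per-key versions."""

    def __init__(self):
        self.data = {}      # key -> value
        self.versions = {}  # key -> integer version

    def get(self, key):
        # Read a value and its current version
        # (like ZooKeeper getData returning the data plus Stat.version).
        return self.data[key], self.versions[key]

    def multi(self, ops):
        # ops: list of ("set", key, value, expected_version)
        #   or ("delete", key, expected_version).
        # First check every expected version; any mismatch aborts the
        # whole transaction, so nothing is applied partially.
        for op in ops:
            key, expected = op[1], op[-1]
            if self.versions.get(key) != expected:
                raise RuntimeError("version mismatch on %r" % key)
        # All checks passed: apply the whole batch.
        for op in ops:
            if op[0] == "set":
                _, key, value, _ = op
                self.data[key] = value
                self.versions[key] += 1
            else:
                _, key, _ = op
                del self.data[key]
                del self.versions[key]


store = VersionedStore()
store.data.update({"/a": 1, "/b": 2})
store.versions.update({"/a": 0, "/b": 0})

# Transaction: read versions first, then commit with them attached.
_, va = store.get("/a")
_, vb = store.get("/b")
store.multi([("set", "/a", 10, va), ("delete", "/b", vb)])
```

If another client changed "/a" between the get and the multi, the version check would fail and the whole batch would be rejected, which is exactly the optimistic-concurrency behaviour an STM needs.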
Hi,
Our index is divided into two shards, and each of them has 120M docs, with a
total size of 75G in each core.
The server is a pretty good one; the JVM is given 70G of memory and about the same
is left for the OS (SLES 11).
We use all dynamic fields except the unique id and are using long queries,
but almost all of them are filter queries; each query may have 10-30 fq
parameters.
When I tested the index (same size
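With 10-30 fq parameters per query, filterCache behaviour usually dominates; a solrconfig.xml fragment of the kind worth experimenting with (the numbers are illustrative assumptions, not tuned recommendations for this index):

```xml
<filterCache class="solr.FastLRUCache"
             size="4096"
             initialSize="1024"
             autowarmCount="256"/>
```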
Hello Solr-users,
I am trying to search a patents dataset (in XML format) which
has fields like title, abstract, patent_number, and year of submission. Since
I would not want to specify the field name in a query, I have used a
catchall field and, using copyField, copied all fields into it. I then
made it the default search field.
My schema looks something like:
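(The schema snippet itself didn't survive in the message. Purely as an illustration, a catchall setup of the kind described usually looks something like this; the field and type names are assumptions:)

```xml
<field name="title" type="text" indexed="true" stored="true"/>
<field name="abstract" type="text" indexed="true" stored="true"/>
<field name="patent_number" type="string" indexed="true" stored="true"/>
<field name="year" type="string" indexed="true" stored="true"/>
<field name="text_all" type="text" indexed="true" stored="false" multiValued="true"/>

<copyField source="title" dest="text_all"/>
<copyField source="abstract" dest="text_all"/>
<copyField source="patent_number" dest="text_all"/>
<copyField source="year" dest="text_all"/>

<defaultSearchField>text_all</defaultSearchField>
```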
Hi Guys,
I'm running Clojure code inside Solr 3.4 that makes calls to Mahout
0.4 for some text clustering jobs. Due to some issues with Clojure, I had
to put all the jar files in the solr war file ('WEB-INF/lib'). I also
made sure to put the hadoop core and mapreduce config xml files in the
same location, with a value of 'file:///' or
'hdfs://localhost:9000...' for 'fs.default.name'.
However, I get the
I am trying to configure Nutch 1.4 with Solr 3.4.
I configured everything, and when I run the command:
./nutch crawl urls -dir myCrawl2 -solr http://localhost:8080 -depth 2 -topN 2
I get the following error:
java.io.IOException: Job failed!
SolrDeleteDuplicates: starting at 2013-06-06 15:49:30
SolrDeleteDuplicates: Solr url: http://localhost:8080
Exception in thread "main" java.io.IOException
Solr,
We are looking to test Solr in a Tomcat setting and have discovered that the samples that
come with Solr are demonstrated with Jetty. Is there a tutorial that teaches how to rebuild
these samples within Tomcat? So far we have Tomcat running and a basic deployed version of
Solr that only shows a white page with "Welcome to Solr" on it, but nothing else.
We are new to Solr and eager to learn,
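For reference, the usual recipe from the Solr wiki for Tomcat is a context file, e.g. conf/Catalina/localhost/solr.xml (the paths here are placeholders):

```xml
<Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="/opt/solr/home" override="true"/>
</Context>
```

A bare "Welcome to Solr" page with nothing else often means the webapp deployed but solr/home was not found, so the cores and admin UI never initialised.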
Hi!
I am very excited to announce the availability of Solr 3.4 with
RankingAlgorithm 1.3.
This version supports NRT and can update 10,000 docs/sec (MbArtists
index). The MbArtists index is the example used in the Solr 1.4 Enterprise
book; it has 43 fields, so it is quite realistic.
RankingAlgorithm 1.3 is 50-100% faster than version 1.2, so you should
see a significant improvement in performance. An internal
Hi
I have indexed some 1M documents, just for performance testing. I have written a query parser
plugin; when I add it to the solr lib folder under the tomcat webapps folder and try to load
the solr admin page, it keeps on loading, and when I delete the plugin's jar file from lib, it
works fine. But the jar file works well with Solr 3.3 and also with Solr 1.4.
Please help.
Regards
Ahsan
I'm in the process of updating from Solr 3.4 to Solr 4.6. Is the SolrJ 3.4
client forward compatible with Solr 4.6?
This isn't mentioned on the
http://wiki.apache.org/solr/javabin wiki page.
In a test environment, I did some indexing and querying with a SolrJ 3.4
client and a Solr 4.6 server, and there were no errors. I'm using the javabin
format for updates and sharded queries
Compile error with nullptr on Clang 3.4/3.5
Hello guys,
I got a compile error with nullptr with Clang in my projects. I use the
android-ndk-10c 64-bit on Windows.
In my code, I use something like this:
std::shared_ptr<A> a = std::make_shared<A>(nullptr, b, c); // Error here
with declare
Class A
{
A(Class B, int b, int c);
};
Do you have any ideas to fix it? I attach the full error log here.
Hello,
I'm using Solr 3.4, and I'm having a problem with a request returning
different results depending on whether or not there is a space after a comma.
The request "name, number rue taine paris" returns results with 4 words out
of 5 matching ("name", "number", "rue", "paris").
The request "name,number rue taine paris" (no space between the comma and
"number") returns no results unless I set mm=3, and then matching words
Hi there
We are currently moving from Solr 1.4 to 3.4, and we are seeing a few issues
with adding documents.
We do a delete-by-query and then do a lot of adds, about 100k, before we do a
commit and optimise.
With 1.4 this was all fine; not super quick, but we didn't see any problems.
With 3.4, the rate of adding documents seriously degrades. For one of our
indexes, at about 80% it severely slows down but struggles