Hi,
I have an application into which I want to pass properties via the browser's query string; these values need to go into the configuration of a particular provider.
The problem is that the $location service is not available to the application during the configuration phase.
What is the best way to grab parameters from the query string and pass them into a provider during the config phase? For reference, the sketch below shows the kind of workaround I have in mind.
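One option is to parse window.location.search by hand inside config(), since the plain DOM globals are available even when $location is not. In this sketch, settingsProvider and setApiKey() are placeholders for my actual provider, not an Angular API:

    // Sketch of the workaround; "settingsProvider" and setApiKey() are
    // placeholders for the real provider in my app.
    angular.module("app", []).config(["settingsProvider", function (settingsProvider) {
        var params = {};
        window.location.search.replace(/^\?/, "").split("&").forEach(function (pair) {
            if (!pair) { return; }
            var kv = pair.split("=");
            params[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || "");
        });
        settingsProvider.setApiKey(params.apiKey); // hand the value to the provider
    }]);

Is there a cleaner way than this manual parsing?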
After some performance improvements to the optimizer [1] [2], I wanted to try parallelizing one of the optimization phases for real (say, one of the "easier" ones). The hypothesis was that the typer was going to be the single source of races, so I picked an optimization phase, dead code elimination, that does most of its work without constantly invoking Tree.tpe or Symbol.info (relatively speaking).
Hi there,
It seems that some of my jobs hang in the reduce phase for a very long time
(for example, days). Is there anything I could tweak? The query is
pretty simple, like:
SELECT SUM(colA), to_date(colB) AS dt FROM table GROUP BY to_date(colB)
ORDER BY dt ASC;
Best regards,
Robin Verlangen
### I have this:
###
### 1111[serial]
### 1111[model]
### 1111[yr_blt]
### 1111[km]
### 1111[price]
### 1111[hours]
### 1111[details]
### 1111[location]
### double[style]
All of the 1111s are the variables being passed; the bracketed word is hardcoded for clarity.
What I get half the time: "Couldn't insert data: Query was empty"
###
### The variables are obviously being passed
###
### Here's
Hi all,
Just wanted to let you know we recently published a new ORM/query engine called patio; it's inspired by Sequel and supports most of what Sequel does.
Some of its features include (a usage sketch follows the list):
- Support for connection URIs and objects
- Supported databases
  - MySQL
- Models
- Associations
- Simple adapter extensions
- Migrations
  - Integer and timestamp based
- Powerful query API
- Transactions
  - Savepoints
  - Isolation
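Here's a quick sketch of what usage looks like. The connection details and table are made up, and the promise-style .chain calls are the Sequel-inspired flavor of the API; see the README for complete, accurate examples:

    // Illustrative only: URI, table, and chaining style are placeholders.
    var patio = require("patio");

    patio.connect("mysql://test:testpass@localhost:3306/sandbox")
        .chain(function (db) {
            // Dataset-style query API, Sequel-flavored
            return db.from("user").filter({active: true}).all();
        })
        .chain(function (users) {
            console.log(users);
            return patio.disconnect();
        }, function (err) {
            console.error(err);
        });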
I am calculating the TF-IDF for a set of files on my system using the map/reduce support in the MongoDB Java driver. In my final reduce phase, for a particular term, I return an array of all the corresponding documents, their positions, and the calculated TF-IDF, as shown below (a sketch of the reduce function follows):
term1 : [docName1, position[], tf-idf], [docName2, position[], tf-idf], [docName3, position[], tf-idf], ...
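To make the shape concrete, here is a sketch of what my final reduce does. The doc/positions/tfidf field names are just my value layout, nothing MongoDB-specific:

    // Sketch: each emitted value is assumed to look like
    //   {docs: [{doc: "docName1", positions: [...], tfidf: 0.42}]}
    // and reduce merges the per-document arrays for a term, returning
    // the same shape it receives (as MongoDB's reduce contract requires).
    var reduce = function (term, values) {
        var out = {docs: []};
        values.forEach(function (v) {
            out.docs = out.docs.concat(v.docs);
        });
        return out;
    };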
Just a quick clarification:
The combiner function acts as an optimization between the map and the reduce
phases. Is the output of the combiner phase stored in memory before being
handed to reduce? Or is it written to disk and subsequently read from disk
by the reduce phase?
Thanks in advance,
-SM
Testing
Thanks,
Saurav Sinha
Hey all,
I've got a sharded MongoDB setup with 2 shards, each shard being a
replica set of 2 mongods and 1 arbiter. We're noticing some strange
behavior in the "index: (3/3) btree-middle" phase of map/reduce. I
think it is well known that this phase acquires a global write lock,
thus blocking all queries. It would be nice if this were eventually
addressed in a future release, as we frequently see
Hi,
We are in the implementation phase of MongoDB in our environment and we need to set up monitoring.
Below are the things I am considering initially (a shell sketch of the first two checks follows the question):
1) check whether the mongod process is running
2) check the lag on the replica set
3) monitor basic load average and CPU usage
Can someone please suggest what additional monitoring I can set up?
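For the first two checks, something like this mongo-shell sketch is what I had in mind (serverStatus() and rs.status() are the standard shell helpers):

    // serverStatus() throws if mongod is unreachable, which covers check 1;
    // rs.status() exposes per-member state and optimes for eyeballing lag.
    var s = db.serverStatus();
    print("uptime(s): " + s.uptime + "  connections: " + s.connections.current);

    rs.status().members.forEach(function (m) {
        print(m.name + "  state=" + m.stateStr + "  optime=" + m.optimeDate);
    });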
Thanks,
Prasad
There are always a few 'Failed/Killed Task Attempts', and when I view the
logs for these I see:
- some that are empty, i.e. the stdout/stderr/syslog logs are all blank
- several that say:
2009-06-06 20:47:15,309 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.io.IOException: Filesystem closed
at org.apache.hadoop.dfs.DFSClient.checkOpen(DFSClient.java:195)
at org.apache.hadoop.dfs
Hi,
I am trying to monitor how long the map phase and reduce
phase of a job take in Hadoop. Is there any way to measure the time taken to
complete each phase on a cluster?
Thanks,
Amit
Hi All,
I am running Nutch on a single-node Hadoop cluster. I do not use an
indexing URL, and I have disabled the LinkInversion phase, as I do not need
any scores attached to any URL.
My question: if the LinkInversion phase is the only phase in Nutch that
requires the Reduce task to be run, then since I have disabled it in the
Crawl.java class, can I go ahead and set the number of
Hi,
We are simulating a drive failure scenario in which one drive is physically
pulled from the rack just as the Reduce phase starts. This means the
entire map phase has completed, and the map outputs have been committed, shuffled,
and fetched by the reducers.
Now the intermediate data is available for the reduce phase.
Exactly at this point, we fail the drive.
The reduce tasks started failing with error messages in
I am using Hadoop 1.2.1 and HBase 0.94.12. I have a scenario where, in the
reducer phase, I have to read data, check whether the map key has already been
inserted, and then put it into an HBase table. When I tried this on a single node,
all gets and scans worked, but when I tried a 3-node cluster, scanning does
not work. Can anyone help me with this part?
If I call scope.$digest(), I expect $$phase to be set to "$digest" only for that scope and its children.
In my case, I have two scopes:
scopeA < $rootScope
scopeB < $rootScope
scopeA and scopeB are not related, so I can call scopeA.$digest() and scopeB.$digest() separately. It turned out that, accidentally, in only one case I need to call scopeA.$digest() as a consequence of something happening
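A minimal repro sketch of the setup. From my reading of the Angular source, $$phase is actually recorded on $rootScope, which would explain the flag being shared across unrelated scopes:

    // Two unrelated child scopes of $rootScope, digested independently;
    // as far as I can tell from the source, $digest sets $$phase on
    // $rootScope, so both calls touch the same flag.
    var $rootScope = angular.injector(["ng"]).get("$rootScope");
    var scopeA = $rootScope.$new(); // scopeA < $rootScope
    var scopeB = $rootScope.$new(); // scopeB < $rootScope

    scopeA.$watch("x", function () {});
    scopeA.$digest(); // digests scopeA and its children only
    scopeB.$digest(); // separate, independent digest of scopeB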
Hi guys,
I'd like to understand how MongoDB map/reduce works in a sharded
environment. Running map/reduce in MongoDB offers quite a few features
and I'm not sure I understand the complete behavior.
Specific questions:
- (if defined) how are sort and limit applied?
- is the reduce phase run on all shards?
- is there a shuffle phase before reduce is run?
- what happens when keeptemp/out are
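To make the questions concrete, this is the shape of the call I mean (collection, fields, and values below are placeholders):

    // Placeholder map/reduce showing where query, sort, limit, and out fit in.
    db.events.mapReduce(
        function () { emit(this.userId, 1); },                    // map
        function (key, values) { return Array.sum(values); },     // reduce
        {
            query: {ts: {$gte: ISODate("2012-01-01T00:00:00Z")}}, // pre-filter input
            sort:  {userId: 1},    // sort the input (requires an index)
            limit: 10000,          // cap the number of documents fed to map
            out:   "mr_out"        // output collection (successor to keeptemp)
        }
    );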
Hi all
How can this be possible?
"msg" : "m/r: (1/3) emit phase 68169953/67979195 100%"
BTW, I run mapReduce with the query {_id: {$gt: someId, $lt: someId}}, and I don't update the _id field.
Hi
I got Nutch working on my cluster after making the necessary changes to my
crawl filter file. It seems to be working.
I fired a crawl command two days back on the Nutch cluster to crawl a
list of 20 websites to a depth of 8.
As of now I think it's fetching at depth 3, and the segment files
generated are almost 70MB in size. I opened the files and I could see
valid URLs.
The first two fetch phases