Hello everybody,
I have developed a tool that spins up a sharded cluster with four shards
(servers), and I run tests using one, two, three, and four shards in
turn.
When I use one shard, find() queries are very fast (3,500 queries per
second) for a certain volume of data. The same find() queries, for the same
volume of data, now using two, three and four servers (shards), are very
I am converting a single MongoDB replica set to a sharded cluster using this guide: http://docs.mongodb.org/manual/tutorial/convert-replica-set-to-replicated-shard-cluster/
I have a 3 node replica set and 3 config servers running. When I get to starting mongos with the configdbs, I receive an error. See below.
mongos --configdb config1:30000,config2:30000,config3:30000 --port 27017 --chunkSize 1
Hi,
I am experiencing a problem with mongos on my sharded cluster.
The problem is that my mongos refuses to start:
mongodbserver-front-01:~# service mongodb start
[FAIL] Starting database: mongodb failed!
and nothing is happening in my log (/var/log/mongodb/mongodb.log).
All my mongodb/mongos configurations (/etc/mongodb.conf) are configured not to use any authentication mechanism:
noauth = true
#
Hi all,
I'm unable to back up a sharded MongoDB cluster.
/mongo-metadata/backuptest# mongodump --directoryperdb --oplog --out dump
13106 nextSafe(): { $err: "can't use 'local' database through mongos",
code: 13644 }
Apparently you cannot do a dump with --oplog when using sharded
collections. But how do you do consistent backups then, without locking
everything?
MongoDB version
Hi all,
I have a sharded cluster with 4 shards (each a 3-member replica set), 3 config servers and 2 mongos. All MongoDB servers are on separate machines, except that the 2 mongos servers run on the same machine on different ports (just for testing). I am using the Mongo PHP driver 1.0.9 for this. I am able to connect to either one of the mongos servers individually, but not when I give
Hi,
I have a sharded cluster set up with replica sets, i.e. each shard is a replica set containing 3 nodes, and there are two shards in total.
Along with this I have 3 config servers and a single mongos instance.
Everything is working as expected, but to my surprise the update and delete queries are not performing well.
I have sharding enabled on all the collections and have defined the shard
I set up security authentication on one of my MongoDB clusters.
What I did was:
1) created and distributed a keyfile to each mongod, config server and
mongos node
2) enabled security in the configuration YAML file:
security:
  keyFile: /data/keyfile
  clusterAuthMode: keyFile
  authorization: enabled
3) restarted each node
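For reference, step 1 can be done as in the sketch below. This is an assumption about how the keyfile was created (the post doesn't say); the /data/keyfile path comes from the config above, but the sketch defaults to a local path so it runs anywhere:

```shell
# Minimal sketch: generate a random base64 keyfile for internal cluster
# authentication and lock down its permissions (mongod refuses to start
# if the keyfile is readable by group/others).
KEYFILE=${KEYFILE:-./keyfile}
rm -f "$KEYFILE"
openssl rand -base64 756 > "$KEYFILE"
chmod 400 "$KEYFILE"
```

The same file must then be copied to every mongod, config server, and mongos host before restarting them.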
what happened was:
a) mongod primary and secondary
Okay, this is my setup:
Config Server:
ip0:27019
Mongos:
ip0:27018
Shards :
ip1:27018
ip2:27018
ip3:27018
I have JSON files of all collections of a database (generated using
mongoexport).
Now I want to mongoimport these files into the above sharded database.
So I do the following:
$ mongoimport --host ip0:27018 --username admin --db DBName --collection Col
This
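The per-file import described above could be scripted as a loop. This is a sketch under the assumptions that the mongoexport files live in a dump/ directory and that each file name matches its collection name (e.g. Col.json for collection Col); the password flag is omitted as in the original command:

```shell
# Hypothetical loop: import every mongoexport JSON file through the
# mongos at ip0:27018, deriving the collection name from the file name
# (e.g. dump/Col.json -> collection "Col").
for f in dump/*.json; do
  [ -e "$f" ] || continue           # skip if no exports were found
  coll=$(basename "$f" .json)       # strip directory and .json suffix
  mongoimport --host ip0:27018 --username admin \
    --db DBName --collection "$coll" --file "$f"
done
```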
Hi Everyone,
I am in the process of migrating a replica set from a data center to AWS,
but when I checked the settings, I noticed that sharding is not enabled,
yet they are using 3 MongoS / routers and 3 MongoC / config servers. So,
since sharding is not used in the current system, would it be safe to
migrate the replica sets and not use MongoS or MongoC? Can the users
connect directly
Hello,
I have a Mongo cluster that consists of:
- 1 Arbitrator
- 3 Config Servers
- 3 Replica sets
- 3 Shards per replica
I am attempting to set up server-to-server SSL with a wildcard SSL
certificate from DigiCert. I've generated a certificate for each subdomain
in the cluster as recommended by the documentation; however, when I start
the nodes I get the following error in
As part of our data quality management procedures, I have been tasked with setting up a daily job that runs a validation check across all the collections in our Mongo sharded cluster. When I run db.collection.validate(true) from the mongos of our sharded cluster, the process is so intensive that it basically brings the entire sharded cluster to a halt. In addition, while running the validation process the
I'm currently trying to deploy a mongod 2.6.5 sharded cluster on CentOS
7 VMs where firewalls are disabled and clocks are synchronized with NTP.
I'm following the "Deploy a Sharded Cluster" tutorial from:
http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/
Starting the config servers:
mongod --configsvr --dbpath /data/configdb --port 27019
They started correctly and I can
I am using MongoDB 2.6, with Java Driver 2.11.3, in a sharded cluster using hash partitioning.
I have a collection, with a Spatial field, on which I have created a 2d index as:
db.myColl.ensureIndex( { "sender_location" : "2d"} )
Now I am trying to run a geoNear query via the Java driver (which I had previously tested successfully on a single machine, no sharding) and I keep getting an error, asking
Hi,
My mongos processes on all 3 servers have shut down unexpectedly at 6:45am for the past 3 days.
It might be coincidental, but it was soon, if not immediately, after we upgraded from 2.4.2 to 2.4.3.
One possibility we are investigating is that the daily snapshots are scheduled (or finish) right around that time. Not sure how that would cause mongos to shut down on all 3 servers, though.
Also
Hi, Buddy,
We want to migrate a MongoDB cluster from IDC1 to IDC2. The Mongo
version is v2.2; the data size is about 10 TB, with shards and replica sets.
I read the document about backup strategies,
http://docs.mongodb.org/v2.2/administration/backups/#sharded-cluster-backups,
but it is difficult to dump such a large backup file.
The steps I am considering are as follows:
1. add a
Hi Experts
I am new to the MongoDB world and am trying to drop a database from a sharded cluster
using the command below. Unfortunately, I still can't drop the database and it
shows up in show dbs again.
mongos> db.dropDatabase()
mongos> use shmt
switched to db shmt
mongos> db.dropDatabase()
{ "dropped" : "shmt", "ok" : 1 }
mongos> show dbs
admin (empty)
config
I'm currently moving a replica set to a sharded cluster. When I first
enable sharding on the collection, will it take a long time to initialize?
The collection is about 700 GB. I just want to know what kind of downtime I
should expect. In my tests with a small cluster it was instant, but
that was with significantly less data.
Greetings -
I have a sharded cluster that I am in the process of migrating to AWS. It
consists of 3 mongos, 3 config servers and 2 shards with 3 replica set
members each, 12 servers altogether.
All the new servers have been created on AWS and all have Mongo running.
Based on the Mongo link below, I have performed the following steps so
far:
> Migrate a Sharded Cluster to different
Hello everybody,
I am using a sharded environment with 3 config servers and two shards.
Inserting the benchmark data set works very well, but when I run a set of
find() queries on that collection (10,000,000 documents), the query time is
very slow.
Specifically, I run 500 find operations in a for loop.
The same query on a non-sharded data set (10,000,000 documents) is very
efficient (1,500 queries per second).
I
Shards in our cluster permanently utilize the disk at up to 100%, but the
read/write data rate is only approx. 3-8 MB/s.
Queries to those collections are only findOne() by shard key.
Other collections are time-based: one collection per day. Memory
consumption of one of the shards is in the attachments.
I see that the cache size correlates with the inactive memory size, and
the cache flushes periodically.
What could be the reason for such