My team is in pre-production, and we're chasing some odd behavior that we
believe is contributing to stability issues with our application and our
Cassandra cluster.
First, some environment information:
Akka version: 2.4.11
Cassandra version: 3.3, running as a 3-node cluster across 2 DCs
Now, based on my understanding of Akka persistence, in a healthy system I'd
expect my writes to far outpace my reads in the journal, since a persistent
actor should only execute journal reads on creation, to see if there is any
journal to play back. After my actor receives RecoveryCompleted I'd expect
no more reads, and for writes to dominate my journal traffic. Based on our
Datadog graphs, however, our reads are constant and often outpace our
writes. I encountered similar behavior when I ran a simple test with the
DynamoDB journal plugin: I provisioned 15 units per second for both reads
and writes, and when I ran my simple developer test, my CloudWatch graphs
showed reads exceeding the provisioned read capacity while writes stayed
within expected parameters. So am I missing something in the Akka
infrastructure that is constantly reading from the journal table?
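To make the expected lifecycle concrete, here is a minimal sketch of a PersistentActor (the `Counter` actor and its `persistenceId` are made-up names for illustration): journal reads should occur only while `receiveRecover` replays events at start-up, and after `RecoveryCompleted` only `persist` writes should touch the journal.

```scala
import akka.persistence.{PersistentActor, RecoveryCompleted}

// Hypothetical example actor; names are assumptions, not from the original post.
class Counter extends PersistentActor {
  override def persistenceId: String = "counter-1"

  private var state: Int = 0

  // Journal READS happen only here, during replay at actor start-up.
  override def receiveRecover: Receive = {
    case n: Int            => state += n
    case RecoveryCompleted => // after this point no further journal reads are expected
  }

  // Journal WRITES happen here, one per persisted event.
  override def receiveCommand: Receive = {
    case n: Int =>
      persist(n) { evt =>
        state += evt
        sender() ! state
      }
  }
}
```

Under this model, steady-state read traffic against the journal table would only be expected if actors are being repeatedly restarted (triggering replay) or if something else, such as a persistence-query stream, is polling the journal.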
Regards,
Richard
asked Jan 11 2017 at 10:10 in Akka-User by Richard Ney

0 Answers
