Hi there,
I'm running lichess.org, a Scala/Play website; the entire source code can
be found at https://github.com/ornicar/lila.
The app serves 3 million pages a day, plays 600 chess moves per second, and
deals with 12k+ concurrent websocket connections.
It runs on a beefy server with `-Xms32g -Xmx32g
-XX:ReservedCodeCacheSize=24m -XX:+UseG1GC`.
The JVM does well memory-wise; no observable
Hello!
I have an object that models a business flow, let's say BizFlowObj(int
BizFlowNumber).
I want everything that happens in one flow logged to one file, and another flow
logged to another file (that is, logging for one instance of a class, and the
classes used by that instance, goes to one log file, while another instance of
that class logs to a different file).
I tried like this:
private void initializeFlowBasedLogging
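The truncated initializer above suggests one logger per flow. A minimal sketch with plain `java.util.logging` could look like the following; everything beyond the `BizFlowObj` name (file-name pattern, method names, the `slurp` helper) is my own assumption, not from the original post:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Sketch: each flow instance owns a named Logger with its own FileHandler,
// so everything logged through that instance lands in its own file.
class BizFlowObj {
    private final Logger flowLog;

    public BizFlowObj(int bizFlowNumber) {
        // One named logger per flow; the name keeps instances distinct.
        flowLog = Logger.getLogger("bizflow." + bizFlowNumber);
        flowLog.setUseParentHandlers(false); // don't leak into the root log
        try {
            FileHandler handler = new FileHandler("flow-" + bizFlowNumber + ".log");
            handler.setFormatter(new SimpleFormatter());
            flowLog.addHandler(handler);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public void step(String msg) {
        flowLog.info(msg); // goes only to this flow's file
    }

    // Helper for inspection: read a whole file as UTF-8.
    public static String slurp(String path) {
        try {
            return new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Classes used by the instance would need the flow logger (or its name) passed down; with slf4j/logback the same effect is usually achieved with an MDC key plus a SiftingAppender instead of manual handlers.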
I have a Linux server with three Play 2 framework instances on it, and I would like to regularly execute an external Scala script that has access to the whole application environment (models) and that is executed only once at a time.
I would like to call this script from crontab, but I cannot find any documentation on how to do it. I know that we can schedule asynchronous tasks from the Global object, but I
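Independent of Play, the "executed only once at a time" part of a crontab job is commonly enforced with an exclusive file lock. A minimal stdlib sketch (class name, lock path, and task body are my own placeholders):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Sketch: the script takes an exclusive lock before doing its work; a second
// copy started by cron while the first is still running exits immediately.
class SingletonTask {
    /** Runs task only if no other process holds the lock; returns whether it ran. */
    public static boolean runExclusively(String lockPath, Runnable task) {
        try (FileChannel ch = FileChannel.open(Paths.get(lockPath),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = ch.tryLock(); // null if another process holds it
            if (lock == null) {
                return false;             // someone else is already running
            }
            try {
                task.run();               // the actual maintenance work
                return true;
            } finally {
                lock.release();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The usual in-process alternative is to skip cron entirely and schedule the task with the Akka scheduler from Global.onStart, which also avoids the multiple-instance problem as long as only one instance schedules it.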
How to have a Play framework app autostart during boot on Elastic Beanstalk CentOS ec2 instances
(this is how I managed to do it, would love to hear how you've done it)
http://www.gubatron.com/blog/2013/09/27/how-to-have-a-play-framework-app-autostart-during-boot-on-elastic-beanstalk-centos-ec2-instances/
So you've created an Elastic Beanstalk environment, you have a play framework distribution
I am building a webapp that, upon receiving an HTTP POST request, needs to do
some encoding jobs.
Basically, the Play controller receives a media file as the request body.
Upon receiving this file, it needs to pass it to an Akka actor for further
encoding.
The Akka actor will invoke an external shell process via Java's ProcessBuilder.
However, in order for the system to not throw out-of-memory errors, I want
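The actor's shell invocation can be sketched with plain ProcessBuilder. The class name and command here are placeholders; the memory-relevant point is to drain the child's output as it is produced instead of buffering it, and to hand the encoder a file path rather than the file's bytes:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;

// Sketch: run an external encoder and stream its output line by line, so the
// child never blocks on a full pipe and we never hold all output in memory.
class ShellEncoder {
    /** Runs the command and returns its exit code. */
    public static int run(List<String> command) {
        try {
            Process p = new ProcessBuilder(command)
                    .redirectErrorStream(true) // merge stderr into stdout
                    .start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println("[encoder] " + line); // or log it
                }
            }
            return p.waitFor(); // exit code of the encoder
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Inside an Akka actor this blocking call should run on a dedicated dispatcher so it does not starve the default one; writing the uploaded file to disk first and passing its path to the encoder keeps the heap footprint flat regardless of file size.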
Hello,
We are building a new system with the Play 2.2 framework, and we are using the actor model.
I would like to do something like the example in the book "Akka Essentials", chapter 11 (Actors and web applications), which uses Play! for this.
The idea is that it is a REST API: the controller "ask"s an actor for a response, then replies back.
The question is: how many actor instances should I create? Is this only one for
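On sizing: a single actor can serve many concurrent asks since it never blocks the caller, and when one mailbox becomes a bottleneck the usual Akka answer is a router pool rather than manual instances. A hedged application.conf sketch (the actor path and pool size are placeholders, not from the original question):

```
akka.actor.deployment {
  /responder {
    router = round-robin
    nr-of-instances = 10
  }
}
```

The controller then asks the router, and Akka spreads the requests over the ten routees.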
Hi. I looked at the documentation here: http://www.playframework.org/documentation/2.0.2/HTTPServer which says multiple instances can be deployed with the following example:
Let's start the same Play application two times: one on port 9999 and one on port 9998.
$ start -Dhttp.port=9998
$ start -Dhttp.port=9999
After starting the first instance, I get the error message "The application is already
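That error usually comes from both instances sharing one RUNNING_PID file in the same application directory. Depending on the Play version, each instance can be pointed at its own pid file via the pidfile.path system property; a hedged example (paths are placeholders):

```
$ start -Dhttp.port=9998 -Dpidfile.path=/var/run/myapp-9998.pid
$ start -Dhttp.port=9999 -Dpidfile.path=/var/run/myapp-9999.pid
```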
Hi all,
maybe someone could be interested in this.
I have created a suite of Docker images, dockerfiles and bash scripts
useful to deploy a Zookeeper ensemble with 3 or more instances and a
SolrCloud (v. 4 or 5) cluster. SolrCloud 4 cluster is based on Tomcat 7.
https://github.com/freedev/solrcloud-zookeeper-docker
This could be interesting for applications that need a zookeeper/solrcloud
* The S3Client could potentially leak connections due to https://forums.aws.amazon.com/message.jspa?messageID=296676
* Issue 26: Make sure default prefixes don't use underscore as it's reserved
* Include activity class name in log message
* Issue 28: log4j.properties wasn't being written correctly. Thanks to user "mgarski".
* Use the latest Curator version
* Lots of fixes/tweaks for automatic
hi,
is there a way to prevent the start script from creating the "RUNNING_PID" file, or is there an option to use a different name?
In my case I have an upstart script that should start multiple Play instances balanced behind a haproxy.
My problem is that these upstart scripts call the start script with only a different port number, but since there may be a running instance from the previous
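Play's start script honours the pidfile.path system property, so each upstart job can point it at a unique file, or suppress the RUNNING_PID file entirely by sending it to /dev/null. For example (paths and ports are placeholders):

```
$ ./start -Dhttp.port=9001 -Dpidfile.path=/var/run/play-9001.pid
$ ./start -Dhttp.port=9002 -Dpidfile.path=/dev/null
```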
Hi,
Is there any way to have more than one Play application running and sharing workload via Akka remoting?
I have tried the following in my configuration; however, there does not seem to be any communication between the Play instances:
...
deployment {
  /actions {
    router = round-robin
    nr-of-instances = 5
    target {
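For comparison, a complete remote-deployment block in Akka 2.x HOCON looks roughly like the following; the hostnames, ports, and app name are placeholders, and remoting must also be enabled (provider and netty host/port settings) on every instance:

```
akka.actor.deployment {
  /actions {
    router = round-robin
    nr-of-instances = 5
    target {
      nodes = ["akka://app@10.0.0.1:2552", "akka://app@10.0.0.2:2552"]
    }
  }
}
```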
Is this the best I can do without writing my own Reads[ObjectId]/Writes[ObjectId] implementations?
   val objectId = new ObjectId(jsString.as[String])
   val jsString = Json.toJson(objectId.toString)
Is this a question for mongodb-casbah-users?
mj Fri Aug 23 06:48:47 2002 EDT
Modified files:
/pear/Log/Log composite.php
Log:
* Fix bug #18310.
Index: pear/Log/Log/composite.php
diff -u pear/Log/Log/composite.php:1.4 pear/Log/Log/composite.php:1.5
+++ pear/Log/Log/composite.php Fri Aug 23 06:48:47 2002
@@ -1,5 +1,5 @@
I have two Hadoop instances running on one cluster of machines for the purpose of upgrading.
I'm trying to copy all the files from the old instance to the new one but have been having
trouble with both distcp and fs -cp.
Most recently, I've been trying, "sudo -u hdfs ./hadoop fs -cp hftp://mc00001:50070/* hdfs://mc00000:55310/"
where mc00001 is the namenode of old hadoop and mc00000 is the namenode
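For cross-version copies, the documented approach is distcp run on the destination (newer) cluster, reading the source over hftp and passing directory paths rather than a shell glob. Using the hosts from the post (the source path is a placeholder):

```
# run on the NEW cluster so the newer client writes natively to its own hdfs://
sudo -u hdfs hadoop distcp hftp://mc00001:50070/ hdfs://mc00000:55310/
```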
Hi all,
Is Tomcat supposed to create multiple instances of my servlet at startup? Or
is it a bug?
Background:
=======
I have two different domains connected to my Tomcat server (version 5.5.17):
http://mydomain1.host.com and
http://mydomain2.host.com
"mydomain1" is directed to /home/web/tomcat/webapps/ROOT/
and "mydomain2" is directed to /home/web/tomcat/webapps/mydomain2/
Both domains use a copy
Could you please suggest an appropriate Akka clustering mechanism for the
Akka actor system (a description of which is given right after), used in
conjunction with the Play framework.
*Description of our system.*
We have several Akka actors (different actors for database interaction, XMPP
interaction, and some for supervision), which are fired once we get a
web-service request
and it
I have an application running on multiple EC2 instances. I need to
aggregate the logs generated in those instances to a central location.
i.e., the logs generated on the EC2 instances automatically become
available on the centralized server. I have logs generated by my
application and many 3rd-party applications, and system logs such as syslog,
the secure log, etc.
I have following questions:
1. Can Flume
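The Flume-NG pattern for this is an agent on each EC2 instance that tails the local logs and ships events over Avro to a central collector agent. A hedged minimal per-instance agent config (hostnames, paths, and ports are placeholders):

```
# per-instance agent: tail a local log, buffer in memory, ship to the collector
agent.sources  = tail
agent.channels = mem
agent.sinks    = collector

agent.sources.tail.type = exec
agent.sources.tail.command = tail -F /var/log/syslog
agent.sources.tail.channels = mem

agent.channels.mem.type = memory
agent.channels.mem.capacity = 10000

agent.sinks.collector.type = avro
agent.sinks.collector.hostname = central.example.com
agent.sinks.collector.port = 4141
agent.sinks.collector.channel = mem
```

The central server then runs an agent with an Avro source on port 4141 and a file or HDFS sink; one exec source per log file covers the application and 3rd-party logs as well.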
At my company we have Tomcat 5.5 instances running lots of webapps, and
problems with those apps interfering with each other's log files --
especially where they share libraries like Hibernate.
I've read and understood the principles of using log4j repositories,
such as Ceki Gülcü's document here,
but I get the impression that a component-based RepositorySelector is
not available in Tomcat.
[ https://issues.apache.org/jira/browse/HBASE-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jimmy Xiang updated HBASE-5136:
-------------------------------
Attachment: 5136-trunk.patch
> Redundant MonitoredTask instances in case of distributed log splitting retry
> ----------------------------------------------------------------------------
>
>
[ https://issues.apache.org/jira/browse/HBASE-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484496#comment-13484496
]
Sergey Shelukhin commented on HBASE-5136:
-----------------------------------------
Hi. Should this be ok to commit?
> Redundant MonitoredTask instances in case of distributed log splitting retry
> ----------------------------------------------------------------------------
>
>