I really need some advice from MongoDB experts.
Would it be a good idea to store user-uploaded videos in MongoDB GridFS?
Here is the use case:
Writing (less frequent):
User-uploaded videos will be processed, and two versions (.mp4 and .webm) will be saved.
Video files can reach 50MB for now and will grow larger in the future.
Reading (most frequent):
Users will be streaming these videos from the site.
We are going to remove about 80% of the records from our two largest collections. The records to be deleted are evenly spread throughout the collections.
Should we compact the database after we delete the records, to reduce its memory requirements? Most of the remaining records are read randomly fairly frequently and updated less frequently.
MMS states we are running at about
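On the mechanics (leaving the should-you question aside), a hedged sketch of issuing compact per collection from Python. Note that compact blocks operations on the database while it runs, so this assumes a maintenance window or running it against secondaries one at a time:

```python
def compact_command(coll_name, force=False):
    """Build the compact command document; force=True is required to run
    compact on a replica-set primary (worth verifying for your server version)."""
    cmd = {"compact": coll_name}
    if force:
        cmd["force"] = True
    return cmd


def compact_collections(db, coll_names):
    """Run compact on each collection in turn, given a pymongo Database `db`.
    Compact locks the database while running: schedule accordingly."""
    return [db.command(compact_command(name)) for name in coll_names]
```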
I'm trying to decide whether this is a good idea. If I have a table with
usernames, passwords, and IP addresses, I can look at the user's IP address
with $REMOTE_ADDR, see if it matches what's in the database, and allow
access to the page and interaction with the database (the page I want to
protect is the one I use to edit the contents of tables); but if the IP
address does not match
I have a very large MS Access database system (lots of tables that aren't too big) broken into multiple front-end and back-end MDBs.
Eventually Access is not going to perform satisfactorily, so I need to upsize.
I only want to maintain one front-end system (Access or VB, probably) that can attach to two back-end systems: 1. Access MDBs (they work great for our smaller clients) and 2. MySQL.
Is this
Hello,
I'm using LucidWorks, which uses Solr to index my MongoDB database, and
there is an option to use the oplog.rs collection to find out
what's new and index only the delta rather than the whole database.
But I think there is only one oplog.rs collection for the whole deployment; is
that right? If so, that means it's necessary to create indexes to make the
search by database
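You're right that there is a single local.oplog.rs for the whole deployment. As far as I know you can't add secondary indexes to the oplog (it's a special capped collection), so the usual approach is a filtered scan or tailable cursor on the `ns` field. A sketch (the `ns` layout `"<db>.<collection>"` is the standard oplog format):

```python
import re


def oplog_filter_for_db(db_name):
    """Filter matching oplog entries whose namespace belongs to one database."""
    return {"ns": {"$regex": "^" + re.escape(db_name) + r"\."}}


def recent_entries(client, db_name, after_ts):
    """Scan local.oplog.rs for one database's changes after a BSON timestamp,
    given a pymongo MongoClient `client`; returns entries in insertion order."""
    query = dict(oplog_filter_for_db(db_name), ts={"$gt": after_ts})
    return client.local["oplog.rs"].find(query).sort("$natural", 1)
```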
I sent a reply to JW about his problem with performance tuning and came
up with what I thought was a really good idea. But there weren't any bites, so I
thought I'd start a new thread.
There are a lot of threads about setting up MySQL for the best
performance. This might help solve the problem.
What I'd like to see is a web page that has a MySQL/InnoDB configuration
calculator where
First time here. I need to subscribe to a channel and save everything
that comes through. Since I can't issue any SET or GET while
subscribed to a channel, I wonder if it's a good idea to create another
connection just for writing.
Best.
Exposing your entire controller to your views is bad practice, IMO. $scope is a great way to mimic a presentation model/view model/whatever you want to call it while keeping the controller itself focused on logic. Especially considering that exposing the whole controller was already possible via $scope.ctrl = this, I really don't get why this was added. Its existence will be seen by many devs
Hi,
Interpolation is a great way to provide content dynamically. The drawback is some significant performance limitations if you overuse the feature in your views.
Why? Because every interpolation registers a watch expression on the current scope to take part in the $digest cycles.
So for non-static content provided through Angular mechanisms, I wanted to get rid of
In many applications there are tables that are used in many
lookups (for example, in many joins), are often relatively
small, and could easily be kept in memory for best performance. They
could be created as heap tables, but the problem is that every application
modifying them would have to be patched to update both the heap table and its
disk-based copy. Additionally you
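One way to avoid patching every application is to treat the in-memory copy as derived data and rebuild it from the disk table on a schedule or at startup (note that MEMORY tables don't support BLOB/TEXT columns). A sketch that just generates the MySQL statements, which you could run from any connector; table names are placeholders:

```python
def memory_copy_statements(table):
    """Statements to (re)build an in-memory copy of a disk-based lookup
    table using the MEMORY (heap) engine."""
    mem = f"{table}_mem"
    return [
        f"DROP TABLE IF EXISTS {mem}",
        f"CREATE TABLE {mem} LIKE {table}",
        f"ALTER TABLE {mem} ENGINE=MEMORY",
        f"INSERT INTO {mem} SELECT * FROM {table}",
    ]
```

A cron job running these four statements keeps the heap copy fresh without touching the applications that write to the disk table.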
Hi,
I plan to use MongoDB and Ruby on Rails for e-commerce, using
PayPal Express Checkout. Do you think MongoDB can support e-commerce?
Thanks,
Julie
We have a database that is partially restored every day with mongorestore - some collections are dumped and re-created. We've seen a lot of file growth because of that, so to avoid it we started running db.repairDatabase() after each restore. That worked well. We did this via the Mongo Ruby driver, but now the database has grown and we're starting to time out.
We're trying
Hello all,
I am using Solr Cloud today and I have the following need:
- My queries focus on counting how many users meet certain criteria,
so my main document is "user" (the parent table).
- Each user can access several web pages (a child table), and each web
page might have several attributes.
- I need to look up users for whom there is some page, accessed by them,
which
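Since the documents nest (user -> pages -> attributes), this sounds like a case for Solr's block-join parent query parser, assuming the users and their pages are indexed together as nested blocks. A sketch that only builds the request parameters; field names like `doc_type` and the child clause are assumptions about your schema:

```python
def count_users_params(child_clause, parent_filter="doc_type:user"):
    """Parameters for /select that count parent (user) docs having at least
    one child (page) doc matching child_clause; rows=0 because only
    numFound is needed."""
    return {
        "q": f'{{!parent which="{parent_filter}"}}{child_clause}',
        "rows": 0,
    }
```

The response's `numFound` is then the number of distinct users with at least one matching page.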
I believe having MongoDB integrated into a mail server could be pretty
nice. I'd imagine something like
mail server + MongoDB + GridFS
I run a Postfix server. I don't know the internals of Postfix, but it
seems modular enough to add MongoDB support. I'm not sure yet how GridFS
could be used instead of the file system.
It could make it easy to deploy a huge mailing service. I think the
idea is awesome.
We have a number of queries that produce good results based on the textual
data but are contextually wrong (for example, an "SSD hard drive" search
matches the music album "SSD hip hop drives us crazy").
Textually a fair match, but SSD is a term that strongly relates to technical
documents.
We'd like to be able to direct this query more strictly in the direction of
the technical documents
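One lightweight option is an edismax boost query (`bq`), which raises the score of documents matching a clause without excluding anything else. A sketch that builds the request parameters; the `doc_category` field and its values are assumptions about your schema:

```python
from urllib.parse import urlencode


def boosted_params(user_query, boost_clause="doc_category:technical", boost=10):
    """Rank technical documents higher without filtering out other matches."""
    return {
        "q": user_query,
        "defType": "edismax",
        "bq": f"{boost_clause}^{boost}",
    }


# Example request parameters a client might send to /select:
params = boosted_params("SSD hard drive")
query_string = urlencode(params)
```

If you need a hard restriction instead of a preference, the same clause as an `fq` filter would exclude non-technical documents entirely.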
Hi all,
I have a situation where it would be useful to extend scala.Tuple2. I
want to be able to do something like
val (x, y) = bla
and pass the object to methods expecting a Tuple2, but nevertheless
be able to define methods like flatMap on the new type.
Is it a good idea to just extend scala.Tuple2? Or should I try to get
the same behavior using implicit conversions? (I haven't
I'm digging into directives, and I want to write them in a form that is reusable across projects, and as flexible as possible.
I've come up with the following syntax. Is there a better or more preferred way to do this?
function myCustomDirective(app, options) {
    options = angular.extend({
        'name': 'myCustomDirective',
        'my_option': true,
        'priority': 9999
    }, options);
I was recently given charge of enhancing our web server. One of the changes I've recommended is moving to Apache.
The problem is that this school wants to put all their course outlines into this database and have web pages built from the input. This is great, but some of these outlines are huge. Wouldn't this cause problems because of all the queries being run?
My solution is to create a cron job to build
Dear Sirs,
I have a web site which keeps user data in MySQL. I am afraid the
existing server may fail outside of my control.
I want to keep another server on standby, and I want to register the new
server in DNS as the third and fourth name server.
As far as I know, if the primary and secondary servers do not respond,
InterNIC will divert traffic to the third and fourth servers.
But I need to keep the new server's MySQL
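Keeping the standby server's MySQL in sync is usually done with replication rather than DNS alone. A minimal sketch, assuming classic master/replica binlog replication; the hostnames, credentials, and log coordinates are placeholders you'd take from SHOW MASTER STATUS:

```sql
-- On the primary's my.cnf: server-id=1 and log-bin=mysql-bin (then restart).
-- On the standby (server-id=2 in its my.cnf), point it at the primary:
CHANGE MASTER TO
  MASTER_HOST='primary.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
-- Verify replication is running:
SHOW SLAVE STATUS\G
```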
I've been working on a project that was started from scratch. I'm a
minimalist, so I like to keep things as simple as possible. I've been
using this idea for a database abstraction, and I thought I'd see if I
could get some constructive criticism.
Here is an example of how I use it:
$table = 'customers';
// validation is handled before this
// the class also uses PDO, so