Suppose you want to create a table for every user...
When the directory where the files (tables) are stored, i.e.
(/var/lib/mysql/db/etc), fills up on your system,
will the next table creation result in a MySQL crash?
Or will MySQL create a subdirectory, store more tables in it, and keep going?
In reply, to Mick Hanna, who writes:
> suppose you want to create a table for every user...
>
> when the directory where the files(tables) get stored ie:
> (/var/lib/mysql/db/etc) is full for your system,
> will the next table entry result in a mysql crash?
> will mysql create a subdirectory and store more tables in it and keep going?
>
Hello Mick,
MySQL won't necessarily crash; a CREATE TABLE on a full filesystem just fails with an error (typically errno 28, "No space left on device"). It does not create subdirectories on its own: each database maps to a single directory.
Hi,
I'm currently trying to configure syslog4j (which uses a
log4j.properties file) to carry out logging on my system. The properties
file is below:
log4j.rootLogger=DEBUG, syslog
log4j.category.EventLogger=INFO, syslogAudit
log4j.additivity.EventLogger=false
log4j.appender.syslog=com.org.CustomAppender
log4j.appender.syslog.protocol=tcp
log4j.appender.syslog.host=localhost
log4j.appender.
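For comparison, a complete configuration using log4j's built-in SyslogAppender might look like the sketch below (the custom appender's remaining properties are cut off above, so the layout and facility values here are assumptions):

```properties
log4j.rootLogger=DEBUG, syslog
log4j.appender.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.syslog.syslogHost=localhost
log4j.appender.syslog.facility=LOCAL0
log4j.appender.syslog.layout=org.apache.log4j.PatternLayout
log4j.appender.syslog.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n
```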
Hi,
How can we find out the threshold for any Camel route? I mean, how do we
find the maximum number of messages a particular route can process before
it can't take any more?
I came across the Camel Throttler; if the threshold is reached, will the
new messages be saved somewhere for later processing?
Regards
kiran Reddy
View this message in context: http://camel.465427.n5.nabble.com/camel
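For context, the Camel Throttler EIP caps how many messages pass per time period; by default excess messages wait rather than being dropped. The mechanism is essentially a windowed rate limit, sketched here in plain Python (an illustration of the idea, not Camel's actual implementation):

```python
import time

class Throttler:
    """Allow at most max_msgs messages per period_s seconds; extra calls wait."""
    def __init__(self, max_msgs, period_s):
        self.max_msgs = max_msgs
        self.period_s = period_s
        self.window_start = time.monotonic()
        self.count = 0

    def acquire(self):
        now = time.monotonic()
        if now - self.window_start >= self.period_s:
            self.window_start = now          # start a fresh time window
            self.count = 0
        if self.count >= self.max_msgs:      # window full: wait out the remainder
            time.sleep(self.period_s - (now - self.window_start))
            self.window_start = time.monotonic()
            self.count = 0
        self.count += 1

throttler = Throttler(max_msgs=100, period_s=1.0)
throttler.acquire()  # returns immediately while under the limit
```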
Hi all:
I have been reading the MySQL 5.1 documentation, and it says that the
default 50% threshold for natural-language searches can be changed in
storage/myisam/ftdefs.h. But what do I have to change to allow indexing of
all words? The percentage is not explicit in the file, so I don't know what
to modify.
Thanks in advance,
Mario Barcala
Hi all,
https://cwiki.apache.org/MAHOUT/dirichlet-process-clustering.html
According to this page, you can specify a threshold for the Dirichlet
driver. The page explains that a threshold of 0 will emit all clusters with
their associated probabilities for each vector.
So I ran Dirichlet clustering with a threshold of 0.
But the clusteredPoints/part-m-00000 sequence file is empty (its length is
120 bytes).
Hi,
Why, in the recommender, is the threshold taken to be the user's average
preference value plus one standard deviation?
Can we assume that good recommendations are anything above the user's
average preference?
Many thanks
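The mean-plus-one-standard-deviation rule can be sketched as follows (a generic illustration, not Mahout's actual code): items scoring above the user's own mean plus one standard deviation count as relevant, a stricter cut than the plain average:

```python
import statistics

def relevance_threshold(prefs):
    """Threshold = mean of the user's preference values plus one stdev."""
    mean = statistics.mean(prefs)
    stdev = statistics.pstdev(prefs)  # population stdev; sample stdev is another choice
    return mean + stdev

prefs = [1.0, 2.0, 3.0, 4.0, 5.0]
cut = relevance_threshold(prefs)          # 3.0 + sqrt(2), roughly 4.41
relevant = [p for p in prefs if p > cut]  # only the top ratings pass
```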
Just out of curiosity: is there a threshold limitation for the canopy
algorithm? Is it just defined by the user's preference based on the
inter-cluster distances, or is it perhaps limited only by how much memory
is allowed to execute it?
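For context, canopy clustering uses two user-chosen distance thresholds T1 > T2; nothing in the algorithm itself caps their values. A minimal one-dimensional sketch (assuming distance is absolute difference):

```python
def canopy(points, t1, t2, dist=lambda a, b: abs(a - b)):
    """Classic canopy clustering: T1 is the loose threshold, T2 the tight one."""
    remaining = list(points)
    canopies = []
    while remaining:
        center = remaining.pop(0)          # pick an arbitrary point as a center
        members = [center]
        still_remaining = []
        for p in remaining:
            d = dist(center, p)
            if d < t1:
                members.append(p)          # within loose threshold: joins this canopy
            if d >= t2:
                still_remaining.append(p)  # outside tight threshold: stays available
        remaining = still_remaining
        canopies.append((center, members))
    return canopies

clusters = canopy([1.0, 1.1, 5.0, 5.2, 9.0], t1=1.0, t2=0.5)
```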
Hi,
My search returns around 100 results, and I want to impose a window so
that I get only results 20 to 30. That way I can show those ten results to
the customer.
Sreenivas A.
--
View this message in context: http://lucene.472066.n3.nabble.com/solr-search-results-threshold-tp4014428.html
Sent from the Solr - User mailing list archive at Nabble.com.
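In Solr this is plain pagination rather than a threshold: the start parameter is the 0-based offset and rows is the page size, so results 21-30 come back with start=20&rows=10. A sketch of the parameters and the equivalent local slice:

```python
# Parameters you would send to Solr's /select handler:
params = {"q": "*:*", "start": 20, "rows": 10}  # 0-based offset 20, page of 10

# Locally, the same windowing is just a slice:
def page(results, start, rows):
    return results[start:start + rows]

window = page(list(range(100)), start=20, rows=10)  # items 20..29
```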
I plan to set up HDFS on 20 servers, using /fs for the data. I want /fs
to keep at least 20% free, so I don't want this filesystem to be filled
up with HDFS data. Is there a way to restrict Hadoop so it does not write
there once the filesystem is over 80% full?
TIA
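Hadoop's dfs.datanode.du.reserved property reserves space per volume for non-HDFS use; it takes an absolute byte count rather than a percentage, so you would size it at roughly 20% of each volume. A sketch for hdfs-site.xml (the value is an assumed example, 200 GiB for a 1 TB disk):

```xml
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Bytes per volume kept free for non-HDFS use -->
  <value>214748364800</value>
</property>
```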
Hi All,
I am using LinearRegression and have a question about the details of the model.predict method. Basically, it predicts the variable y given an input vector x. However, can someone point me to documentation on what threshold is used in the predict method? Can it be changed? I am assuming the input vector essentially gets mapped to a number that is compared against a threshold value.
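For what it's worth, linear regression's prediction is just the weighted sum of the features plus an intercept, returned as-is; there is no threshold step (thresholds belong to classifiers such as logistic regression, where MLlib exposes setThreshold). A minimal sketch with assumed weights:

```python
def predict(weights, intercept, x):
    """Linear regression: y = w . x + b, returned directly with no thresholding."""
    return sum(w * xi for w, xi in zip(weights, x)) + intercept

y = predict(weights=[0.5, -1.0], intercept=2.0, x=[4.0, 1.0])  # 0.5*4 - 1*1 + 2 = 3.0
```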
Hi, when selecting a threshold-based neighborhood, precision increases as
the threshold increases, which makes sense. However, getReach at best
provides recommendations for 0.2 of users and decreases to 0.0002; is
that normal? Recall also drops. When using a fixed-size neighborhood,
getReach gives much higher results.
// === Code used ===
UserNeighborhood neighborhood = new ThresholdUserNeighborhood
I am seeing a small standalone cluster (master, slave) hang when I reach a certain memory threshold, but I cannot work out how to configure memory to avoid this.
I added memory by setting SPARK_DAEMON_MEMORY=2G and I can see it allocated, but it does not help.
The reduce is by key, to get the counts per key:
rdd = sc.parallelize(self.phrases)
# do a distributed count using reduceByKey
counts = rdd.map(lambda p: (p, 1)).reduceByKey(lambda a, b: a + b)
Hi,
I'm wondering if anyone has a solution for a NameNode that never leaves
safe mode; is there any way to work around it?
Thanks.
error: org.apache.hadoop.dfs.SafeModeException: Cannot delete
/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.4696 has not reached the threshold 0.9990.
Safe mode will be turned off automatically.
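The 0.9990 threshold in that message is controlled by the dfs.safemode.threshold.pct property in hdfs-site.xml: the NameNode leaves safe mode once that fraction of blocks has been reported, and `hadoop dfsadmin -safemode leave` forces it out manually. A config sketch (the value shown is the usual default):

```xml
<property>
  <name>dfs.safemode.threshold.pct</name>
  <!-- Fraction of blocks that must be reported before safe mode exits -->
  <value>0.999</value>
</property>
```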
I've figured out a temporary workaround for the problem/feature whereby
words that appear in more than 50% of the records in a fulltext index are
treated as stopwords: I just added as many dummy records as there are real
records in the table. A fulltext search will now not disregard any words
based on the 50% rule.
For performance, I added a column called dummy with a flag indicating
whether the record is real or a dummy.
I am new to ZooKeeper. A couple of questions:
What is the threshold before ZK takes a snapshot? Is there a way to force
ZK to snapshot, for troubleshooting or for any operational reason?
Thanks,
Challa
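ZooKeeper snapshots after a number of logged transactions controlled by the snapCount setting in zoo.cfg (default 100000); the actual trigger point is randomized between snapCount/2 and snapCount so that servers in an ensemble don't all snapshot at once. A sketch:

```
# zoo.cfg: take a snapshot after roughly this many logged transactions
snapCount=100000
```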
Hi,
Is there some way of limiting the results to those above some fixed
threshold?
Thanks in anticipation,
-umar
Cassandra rotates system.log when it reaches 20 MB. We see that old logs
are kept for over a month. Will Cassandra delete or compress these logs
once a certain threshold is reached, or are we supposed to do that
ourselves?
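In Cassandra versions that log through log4j, rotation is driven by the RollingFileAppender settings in conf/log4j-server.properties: maxBackupIndex caps how many rotated files are kept (older ones are deleted), and compression is not done for you. A sketch (the exact values are assumptions):

```properties
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=/var/log/cassandra/system.log
log4j.appender.R.maxFileSize=20MB
# rotated files beyond this count are deleted (no compression)
log4j.appender.R.maxBackupIndex=50
```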
Hiya,
My query below looks for tables with more than 10% free space where the
free space is also greater than 100K.
mysql> SHOW TABLE STATUS WHERE Data_free / Data_length > 0.1 AND
Data_free > 102400\G
*************************** 1. row ***************************
Name: bayes_words
Engine: MyISAM
Version: 10
Row_format: Dynamic
Rows: 97134
Avg_row_length