Greetings,
I'm using the 2.4.x kernel with the ext3 filesystem. I've talked to a
bunch of people and everyone says 'Apache doesn't have a file size limit', but
it sure does! I have a 3 gig file (don't ask why... just go with the flow)
and it gives me a permission denied error when I try to view it. It also
says in the error_log:
[Sun Mar 17 15:31:50 2002] [error] [client x.x.x.x] (75)Value too large for defined data type
Hi all,
My system reports:
Open_tables 512
Opened_tables 24,429
The docs say that if the latter is high I should increase the table
cache size (currently 512).
How does one decide what size to increase it to? And is there a problem
with one of the applications that's making this figure so high? Or is
this normal behaviour?
OS: RH9
Dual 2.4 Xeon
1 GIG RAM
(btw, this kind
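For what it's worth, one rough way to read those two numbers, assuming table_cache really is 512 and the server has been up for a while: Open_tables (512) is pinned at exactly the cache size, so the cache is saturated, and each further increment of Opened_tables (24,429 so far) means a table had to be reopened from disk. The old manual's sizing rule is roughly

    table_cache >= max_connections * (max tables referenced per join)

capped by the OS open-files limit (an open MyISAM table can take up to two file descriptors). In practice: double it (512 -> 1024), watch whether Opened_tables keeps climbing after warm-up, and repeat.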
Hey,
Could someone give me some pointers on how to use custom key-value
serialization to store arbitrary data types? Maybe there is documentation for this?
I am trying to find a Java client that supports this in a straightforward
way. Default Java serialization would also be fine.
Thanks,
Gyula
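Not an authoritative answer, but here is a minimal sketch of the plain-Java-serialization route. It assumes the store is declared in stores.xml with a value-serializer of type "java-serialization" (I believe the stock serializer factory understands that name, but double-check it), that the bootstrap URL is tcp://localhost:6666, and that the store is called "my-store" -- all of those are placeholders:

import java.io.Serializable;
import voldemort.client.ClientConfig;
import voldemort.client.SocketStoreClientFactory;
import voldemort.client.StoreClient;
import voldemort.client.StoreClientFactory;

public class JavaSerializationExample {

    // Any Serializable type works once the store's value-serializer is java-serialization.
    public static class Profile implements Serializable {
        private static final long serialVersionUID = 1L;
        public String name;
        public int age;
    }

    public static void main(String[] args) {
        StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));

        // Keys here are Strings; the key-serializer declared in stores.xml has to match.
        StoreClient<String, Profile> client = factory.getStoreClient("my-store");

        Profile p = new Profile();
        p.name = "gyula";
        p.age = 30;

        client.put("user:1", p);                  // value goes through plain Java serialization
        Profile back = client.getValue("user:1"); // and comes back as a Profile
        System.out.println(back.name);
    }
}

If you need truly arbitrary formats, I believe there is also an "identity" serializer type that hands you raw byte[] so you can do the encoding yourself on the client side.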
Apache/2.0.54 on Debian Stable.
The file is 2.3GB (2,398,513,344 bytes), so I assume that the file is just
too large for this build of Apache.
In the log file I get:
[Fri Sep 29 10:34:38 2006] [error] (75)Value too large for defined data type: access to /training/webcasts/webcast_data/161/webcast.mov failed
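A quick sanity check on that assumption: 2,398,513,344 is larger than 2^31 - 1 = 2,147,483,647, the largest offset a 32-bit signed off_t can hold, and errno 75 on Linux is EOVERFLOW ("Value too large for defined data type"), which is what stat() returns when a file's size does not fit in off_t. So a build of Apache compiled without large file support (i.e. without -D_FILE_OFFSET_BITS=64) would choke on exactly this file while serving anything under 2 GiB fine.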
But there are a few odd things happening that I'm not clear on:
1) Googling for that error message turns
Hi, how can I skip some values when building a ReadOnly store?
In the mapper, I override makeKey() and makeValue();
what should I do if I don't want the current value in the store output?
public Object makeKey(NullWritable key, ProfileWritable value) {
    return null; // ???
}
Returning null seems to create an exception.
Thanks,
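In case it helps, here is the workaround I would try. It is only a sketch: it assumes your mapper extends Voldemort's AbstractHadoopStoreBuilderMapper (old org.apache.hadoop.mapred API) and that makeKey()/makeValue() are only called from map(), so filtering in an overridden map() before calling super.map() drops the record without either method ever returning null. ProfileWritable is your own class from the snippet, and shouldSkip() is a placeholder condition made up for illustration:

import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

import voldemort.store.readonly.mr.AbstractHadoopStoreBuilderMapper;

public class ProfileStoreBuilderMapper
        extends AbstractHadoopStoreBuilderMapper<NullWritable, ProfileWritable> {

    @Override
    public Object makeKey(NullWritable key, ProfileWritable value) {
        return value.getId();   // placeholder: whatever your real key is
    }

    @Override
    public Object makeValue(NullWritable key, ProfileWritable value) {
        return value;
    }

    // Filter here instead of returning null from makeKey()/makeValue().
    @Override
    public void map(NullWritable key, ProfileWritable value,
                    OutputCollector<BytesWritable, BytesWritable> output,
                    Reporter reporter) throws IOException {
        if (shouldSkip(value)) {
            return;                              // emit nothing: the record never reaches the store
        }
        super.map(key, value, output, reporter); // normal path builds the key/value as usual
    }

    private boolean shouldSkip(ProfileWritable value) {
        return value == null;                    // placeholder condition
    }
}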
Hi all,
In one of our customers' databases there is a table that has an
extremely high max_data_length of about 256 TB(!). Another has 1 TB,
while the rest are at 4 GB. (All of these tables are basically the same,
but these two are the ones with the most data.) The largest table
is currently at about 8.7 GB.
Are there any practical consequences of having such an overly large
max_data_length?
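For what it's worth, assuming these are MyISAM tables, those three figures line up exactly with the size of the data pointer MySQL chose when each table was created (max_data_length is roughly 256^pointer_bytes):

    4-byte pointer: 256^4 = 2^32 bytes ~ 4 GB
    5-byte pointer: 256^5 = 2^40 bytes ~ 1 TB
    6-byte pointer: 256^6 = 2^48 bytes ~ 256 TB

So the number is an addressing ceiling (driven by MAX_ROWS / AVG_ROW_LENGTH at CREATE/ALTER time), not space that is actually allocated up front.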
My configuration:
MySQL 4.0.16
MyODBC 3.51
My application is developed using Visual C/C++ 6.0 and MFC.
I have a table that has an autoincrement primary key. I'm using the
CRecordset classes to perform all data manipulation after the tables are
created with the mysql client. At what appears to be a random occurrence,
when I insert a new record, the autoincrement value gets set to a very
0
I am writing a record which has some array elements in it. In one of the
records the number of entries in the array is around 600; in that case,
when I append a value I get an error "Value too large for file block size".
And the worst part is that once I hit this error, for all records after that I
get the same error.
I am using avro-c; the error comes while calling the routine
"avro_file_writer_append_value".
Hi,
Previously we were using ActiveMQConnectionFactory alone, and we observed
that while starting the JBoss server, ActiveMQ was creating and closing multiple
threads (ActiveMQ Task 1, ActiveMQ Task 2, ...); we thought it was creating and
closing multiple connections/sessions.
As a resolution, we want to use CachingConnectionFactory; to configure
CachingConnectionFactory we need
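In case a concrete example helps, here is a minimal sketch of the usual wiring. It assumes Spring's org.springframework.jms.connection.CachingConnectionFactory is on the classpath; the broker URL (tcp://localhost:61616) and the cache size are placeholders to adjust:

import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

public class JmsConnectionConfig {

    public static ConnectionFactory cachingConnectionFactory() {
        // The real connections still come from ActiveMQ; this is the factory used before.
        ActiveMQConnectionFactory target =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // CachingConnectionFactory wraps it so a single underlying connection is reused
        // and sessions/producers are cached instead of being created and closed per operation.
        CachingConnectionFactory caching = new CachingConnectionFactory(target);
        caching.setSessionCacheSize(10);   // placeholder: number of sessions to keep cached
        caching.setCacheProducers(true);   // cache MessageProducers inside the cached sessions
        return caching;
    }
}

The same two beans can also be declared in Spring XML; the key point is that everything else (JmsTemplate, listener containers) should point at the caching factory rather than at the ActiveMQ factory directly.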