Have any of the maintainers considered using JMX or Phoenix for enhancing manageability of JMeter?
Last fall JavaWorld had an article, with some source code, on getting CPU% statistics. It would seem relatively easy to wrap that in a JMX MBean and deploy it on the server JVM (assuming it supports JMX). That way JMeter could record how hard it was making the server process work.
The CPU stuff
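As a rough sketch of the MBean idea (the names `ServerCpu` and `demo:type=ServerCpu` are purely illustrative, and this uses the stock `com.sun.management` extension rather than the JavaWorld native code):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean interface: must be named <impl class> + "MBean".
interface ServerCpuMBean {
    double getProcessCpuLoad();
}

class ServerCpu implements ServerCpuMBean {
    // HotSpot-specific extension of the platform OperatingSystemMXBean.
    private final com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();

    @Override
    public double getProcessCpuLoad() {
        // In [0.0, 1.0]; negative if the value is not yet available.
        return os.getProcessCpuLoad();
    }
}

public class CpuJmxDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=ServerCpu");
        server.registerMBean(new ServerCpu(), name);
        // A remote JMX client (e.g. JMeter) could now poll this attribute.
        double load = (double) server.getAttribute(name, "ProcessCpuLoad");
        System.out.println("ProcessCpuLoad = " + load);
    }
}
```

A JMX-capable JMeter sampler could then poll `ProcessCpuLoad` alongside its own response-time measurements.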
Hi all,
The Apache Phoenix project now provides a custom sink for streaming
Flume events into HBase. These events may be queried through SQL using the
Phoenix JDBC driver.
The detailed instructions can be found here (still on github until we
move to Apache):
https://github.com/forcedotcom/phoenix/wiki/Apache-Flume-Plugin.
Regards
Ravi
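For a quick illustration, querying the ingested events through the Phoenix JDBC driver could look roughly like this (the table name `EVENTS` and the ZooKeeper quorum are placeholders, and it needs a running HBase cluster with the Phoenix client jar on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FlumeEventQuery {
    // Phoenix connection URLs take the form jdbc:phoenix:<zookeeper quorum>.
    static String phoenixUrl(String zkQuorum) {
        return "jdbc:phoenix:" + zkQuorum;
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(phoenixUrl("localhost"));
             Statement stmt = conn.createStatement();
             // EVENTS stands in for whatever table the Flume sink writes to.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM EVENTS")) {
            while (rs.next()) {
                System.out.println("event count: " + rs.getLong(1));
            }
        }
    }
}
```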
Hi,
I am trying to set up Phoenix and test queries on HBase, but I am getting the error below. Any clue what the issue might be? I have added the core jar to the classpath on the HBase region servers using the dynamic jar loading setting in hbase-site.xml, and also added the Phoenix client jar on the client side.
I am getting the same error with sqlline as well.
./performance.py testhost.gs.com 1000000
Phoenix Performance Evaluation
Hi Experts,
Could anybody help me out with installing Phoenix on top of HBase? I am using CDH 4.5 with CM 4.8.
--
Thanks,
Kishore.
The Phoenix team is pleased to announce that Phoenix[1] has been accepted
as an Apache incubator project[2]. Over the next several weeks, we'll move
everything over to Apache and work toward our first release.
Happy to be part of the extended family.
Regards,
James
[1] https://github.com/forcedotcom/phoenix
[2] http://incubator.apache.org/projects/phoenix.html
Hi all,
I've read a lot of good things about Phoenix here, and I have a few
questions that maybe some of you who already use Phoenix can help me with:
How does Phoenix handle pre-existing data (data written before Phoenix was deployed)?
Does the deployment require a full HBase restart or just a RegionServer restart?
How does Phoenix handle values that are binary blobs, say when my value is not an
Integer but a Writable?
I just read about the Phoenix project. The Phoenix folks speak very highly
of it; the pitch is that it gives you SQL and is supposedly pretty
fast. Are there any cases where it's better to use plain HBase without Phoenix?
Is it in general faster than executing native scans and writing your own
coprocessors?
Hi,
I am in a dilemma over whether to go ahead with Hive or Phoenix.
My requirement is low latency for queries.
I tested both on a suitable data set and got the following results.
1. Hive ran all the aggregation queries (with joins, WHERE clauses, and AND conditions) via MapReduce and took around 5 minutes (approximately) with 1 lakh (100,000) rows.
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.3 release. Highlights include:
- functional indexes [1]
- map-reduce over Phoenix tables [2]
- cross join support [3]
- query hint to force index usage [4]
- set HBase properties through ALTER TABLE
- ISO-8601 date format support on input
- RAND built-in for random number generation
- ANSI SQL date/time literals
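As a sketch of how a couple of the 4.3 features fit together (the `EMP` table and `UPPER_NAME_IDX` index names are invented; the functional-index and `/*+ INDEX */` hint syntax follow the Phoenix documentation):

```java
public class FunctionalIndexSketch {
    // A functional index on an expression rather than a plain column [1].
    static final String CREATE_INDEX =
            "CREATE INDEX UPPER_NAME_IDX ON EMP"
          + " (UPPER(FIRST_NAME || ' ' || LAST_NAME))";

    // A query hint forcing the optimizer to use that index [4].
    static String hintedQuery(String name) {
        return "SELECT /*+ INDEX(EMP UPPER_NAME_IDX) */ EMP_ID FROM EMP"
             + " WHERE UPPER(FIRST_NAME || ' ' || LAST_NAME) = '" + name + "'";
    }

    public static void main(String[] args) {
        System.out.println(CREATE_INDEX);
        System.out.println(hintedQuery("JOHN DOE"));
    }
}
```

Both statements would be issued through the Phoenix JDBC driver against a live cluster; the class above only assembles the SQL.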
Hi CDH users,
We are happy to announce the inclusion of Apache Phoenix in Cloudera Labs.
Phoenix, initially designed and open sourced by Salesforce.com, is an
efficient SQL skin for Apache HBase. The blog post referenced below briefly
introduces Phoenix and explains some of its unique features. It also covers
some use cases and compares Phoenix to existing solutions such as Hive and