Thanks a lot for all the answers and suggestions.
It looks like one hacky workaround is to check the Hadoop task status, but
that is far too costly for my project.
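For anyone who does want to try the task-status workaround: a minimal sketch
of polling job progress over the YARN ResourceManager REST API (the
`resourcemanager:8088` address and the application id are placeholders; on
older clusters you would query the JobTracker instead):

```python
import json
from urllib.request import urlopen

# Hypothetical ResourceManager address -- replace with your cluster's.
RM_URL = "http://resourcemanager:8088"

def app_progress(app_json):
    """Extract the progress percentage from a YARN app JSON payload.

    The ResourceManager returns {"app": {..., "progress": <float>, ...}}
    for GET /ws/v1/cluster/apps/<app_id>.
    """
    return app_json["app"]["progress"]

def poll_progress(app_id, rm_url=RM_URL):
    """Fetch one application's JSON from the RM and return its progress."""
    with urlopen(f"{rm_url}/ws/v1/cluster/apps/{app_id}") as resp:
        return app_progress(json.load(resp))
```

Note this only reports the progress of the Hadoop job(s) currently running;
as Bennie points out below, Hive may compile one query into several jobs, so
per-job progress still doesn't give you an overall ETA for the query.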
On Mon, Sep 17, 2012 at 8:31 AM, Bennie Schut wrote:
> The JDBC driver uses Thrift, so if Thrift can't do it, JDBC can't either.
> This can be surprisingly difficult to do. Hive can split a query into
> several Hadoop jobs, some of which run in parallel and some in sequence.
> I've used Oracle in the past (10g and 11g) and could never find out how
> long a large job would take there either, which leads me to suspect it's
> not a trivial thing to do.