g*******g 发帖数: 108 | 1 below is posted in google group.
Chinese in North America Oracle User Group (CINAOUG)
============ start the post ==========================================
RAC is like an amplifier: it multiplies both the good and the bad.
RAC pretty much turns a chunk of in-memory application requests into network
requests, so all those accesses see higher latency, and the frontend
application is impacted.
RAC also does nothing to improve IO throughput; it only creates more access
requests to the storage. If your application is IO bound, RAC makes it
worse in theory.
A well-tuned application scales well on RAC.
An application with IO or buffer cache contention, in general, behaves badly
on RAC.
Running an application on RAC comes with a much higher performance tuning
requirement.
It is ironic that we use RAC to increase capacity, performance, and
availability, yet it is also a performance trap.
Here are a couple of tuning tips for RAC (a lot of people would disagree with
me on this; all my tips have cons):
1) Enable jumbo frames on the interconnect. This is hardware specific; not
all switches support it.
2) Look into hot blocks; they have a greater impact on an app in RAC than in
a single-instance environment. Some techniques are "cache a frequently used
sequence number" and "use a reverse-key index to spread out index data blocks".
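To see why jumbo frames matter for shipping cached blocks over the interconnect, here is a rough back-of-the-envelope sketch. The 28-byte header figure (IP + UDP) and the 8 KB block size are assumptions; real cache-fusion framing overhead varies:

```python
import math

BLOCK_SIZE = 8192   # a typical Oracle data block, in bytes
HEADERS = 28        # assumed IP (20) + UDP (8) header overhead per frame

def frames_needed(mtu: int, block: int = BLOCK_SIZE) -> int:
    """Ethernet frames required to ship one cached block at a given MTU."""
    payload = mtu - HEADERS
    return math.ceil(block / payload)

print(frames_needed(1500))   # standard MTU: 6 fragments per 8 KB block
print(frames_needed(9000))   # jumbo frame: the whole block fits in 1
```

Fewer fragments per block means fewer reassembly steps and fewer chances to drop part of a block under interconnect load, which is the usual argument for jumbo frames.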
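The reverse-key index trick can be illustrated without a database. A toy model (reversing the digits of a zero-padded id stands in for Oracle reversing the key bytes):

```python
def reverse_key(n: int, width: int = 6) -> str:
    # Oracle's REVERSE index stores key bytes reversed; modeled here
    # by reversing the digits of a zero-padded id.
    return str(n).zfill(width)[::-1]

seq = [123456, 123457, 123458]      # monotonically increasing ids
plain = [str(n) for n in seq]       # all share the prefix "12345", so
                                    # inserts pile into one hot leaf block
spread = [reverse_key(n) for n in seq]
print(spread)   # ['654321', '754321', '854321'] -> different leading
                # bytes, so inserts land on different leaf blocks
```

The trade-off (one of the "cons" mentioned above) is that range scans on the indexed column stop being useful, since adjacent key values no longer sit next to each other.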
DBMS_APPLICATION_INFO is a very useful tool in both single-instance and RAC
settings. In a connection-pool client/server setup, it is the best tool to
trace an application. I personally think it is the only meaningful way to
trace an app.
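As a sketch of the connection-pool tracing pattern: wrap each unit of work so the session is tagged before the work and cleared after, since pooled sessions are shared between users. The `app_info` wrapper name is mine, and this assumes any DB-API style Oracle connection (e.g. python-oracledb); only the DBMS_APPLICATION_INFO.SET_MODULE call itself is from the package:

```python
from contextlib import contextmanager

@contextmanager
def app_info(conn, module, action):
    """Tag the session so V$SESSION / ASH show which app code is running.

    'conn' is assumed to be a DB-API connection to Oracle; 'app_info'
    is a hypothetical helper name, not part of any Oracle API.
    """
    cur = conn.cursor()
    # DBMS_APPLICATION_INFO.SET_MODULE takes (module_name, action_name).
    cur.callproc("DBMS_APPLICATION_INFO.SET_MODULE", [module, action])
    try:
        yield cur
    finally:
        # Clear the tag so the pooled session is not misattributed later.
        cur.callproc("DBMS_APPLICATION_INFO.SET_MODULE", [None, None])
```

Usage would look like `with app_info(conn, "billing", "post_invoice") as cur: cur.execute(...)`, after which the DBA can filter V$SESSION by MODULE/ACTION to trace exactly that piece of the application.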
Any inputs on this are welcome. | g*******g 发帖数: 108 | 2 discussion at c*****[email protected]
My thoughts on RAC come from my own experience; I do not remember where I
got the "amplifier" analogy. I doubt it is from Tom Kyte, since he rarely
criticizes an Oracle product. Maybe he did or will; I cannot be held
responsible for his words :)
Let's focus on the tech side now.
I would characterize the primary feature of OLTP as "user interaction":
queries run fast and expect few records. Repeatedly used data is (typically)
cached in the buffer cache.
In a sequential read wait event, Oracle sends an index data block read
request to the storage subsystem. What are the bottlenecks in this operation?
1) The CPU (Oracle server) sends the request;
2) the request travels on the data bus (HBA/Fibre Channel/SCSI etc.);
3) the storage processor receives and processes the request (RAID 10/RAID 5/
prefetch algorithm etc.) and sends the data back.
Typically:
Case 1: if the sequential read requests are below the storage system's
throughput, the bottleneck is storage processing latency (the CPU cycles and
algorithms needed to handle an IO request).
Case 2: if the sequential reads and other IO requests are above the storage
throughput, the bottleneck is queueing time; this is what I call an IO-bound
system.
RAC handles the second case poorly. It is just like morning rush-hour
traffic through the Holland Tunnel into Manhattan: adding 16 more approach
lanes without widening the tunnel will not make things better.
I am not a Java expert, but here is a link to Java code using DBMS_APPLICATION_INFO.set_module: http://stackoverflow.com/questions/53379/using-dbms-application-info-with-jboss | g*******g 发帖数: 108 | 3 discussion at google group: c*****[email protected]
===============================================================
Scenario 1. I think you meant IO latency? RAC will help you in this case.
My thought on RAC in this scenario is that RAC improves application
throughput and reduces application latency. You have to figure out what is
important to you. I assume CPU and other resources are not the bottleneck.
Scenario 2. Yes, RAC will guard against a node failure.
As for fetching data blocks from another node, this does not mean it will be
faster. Oracle has to go through the NETWORK to get the data, and network
latency (not capacity) is higher than that of Fibre Channel data access;
this is determined by the network protocol (TCP).
Application partitioning is important in a RAC setup; it reduces fetching
data from another node's memory, which slows down the app.
Check out "cache fusion"; it is the algorithm RAC uses to do this data
fetching.
A side note:
You do not want each node's load at 50%. If both are at 50% and a failover
occurs, the remaining node would be at 100%, which you do not want to see.
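That headroom rule generalizes to more than two nodes. A minimal sketch of the arithmetic (the utilization figures are illustrative):

```python
def post_failover_utilization(per_node: float, nodes: int) -> float:
    # After one node fails, the survivors must absorb the total load.
    total_load = per_node * nodes
    return total_load / (nodes - 1)

print(post_failover_utilization(0.50, 2))  # 1.0 -> the survivor saturates
print(post_failover_utilization(0.40, 2))  # 0.8 -> 20% headroom remains
```

In other words, for a two-node cluster to survive a failover with headroom, each node should normally run well below 50%.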
====================================================================
Suppose I have a single-instance
db which falls into your case 2. Now I am turning it into a RAC by
adding another identical node.
Scenario 1: My purpose is to handle 2x the workload. I probably cannot
achieve my goal in this case without improving my I/O throughput.
Scenario 2: My main purpose is to achieve high availability. Each node now
handles 50% of the workload, so the total I/O requests remain the same;
additionally, blocks can be fetched from another node's buffer cache, so not
every read necessarily goes to disk. In addition, I may partition my
workload, e.g. separating OLTP and reporting onto different nodes.
I imagine in this case I can achieve my goal, and maybe even improve overall
performance. | v*****r 发帖数: 1119 | 4 In reality, RAC performance might have nothing to do with how well the
application is designed and coded. Think about this scenario: if ad-hoc
queries are allowed against a well-designed RAC application, they might just
screw up the system if load balancing is enabled.
Application partitioning, mm-hm, in many cases that just implies disabling
LB; and one of the excuses I used to explain to our clients why I disabled
LB for their production system is, guess what, application partitioning.