Digest Net: July 2007 Archives

July 2007 Archives

Usage of the vmstat command

Purpose

Reports virtual memory statistics.

Syntax

vmstat [ -f ] [ -i ] [ -s ] [ -I ] [ -t ] [ -v ] [ PhysicalVolume ... ] [ Interval [ Count ] ]

Description

The vmstat command reports statistics about kernel threads, virtual memory, disks, traps, and CPU activity. Reports generated by the vmstat command can be used to balance system load activity. These system-wide statistics (among all processors) are calculated as averages expressed as percentages, or as sums.

If the vmstat command is invoked without flags, the report summarizes virtual-memory activity since system startup. If the -f flag is specified, the vmstat command reports the number of forks since system startup. The PhysicalVolume parameter specifies the name of a physical volume.

The Interval parameter specifies the amount of time, in seconds, between reports. The first report covers the time since system startup, and each subsequent report covers the interval since the previous one. If the Interval parameter is not specified, vmstat generates a single report and exits. The Count parameter can only be specified together with the Interval parameter; if Count is specified, its value determines the number of reports generated and the number of seconds apart. If Interval is specified without Count, reports are generated continuously. Count may not be 0.
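As a concrete illustration, the sketch below runs the interval form and averages the CPU idle column with awk. The sample output is invented for illustration (not captured from a real system); on a live AIX box you would pipe `vmstat 5 3` itself through the same awk step.

```shell
# Hypothetical output of 'vmstat 5 3' on AIX (three 5-second samples);
# the values below are made up for the example.
sample='kthr     memory             page              faults        cpu
 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 1  1 197167 4761   0   0   0   0    0   0 459 1264 146  2  2 93  3
 2  0 197167 4760   0   0   0   0    0   0 451 1157 142  3  2 92  3
 1  0 197167 4760   0   0   0   0    0   0 448 1091 140  2  1 94  3'
# Skip the two header lines and average column 16 (id = CPU idle %).
echo "$sample" | awk 'NR>2 {sum+=$16; n++} END {printf "avg idle %.1f%%\n", sum/n}'
```

The same pattern works for the us, sy, or wa columns by changing the field number.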

AIX 4.3.3 and later contain an enhancement to the method used to compute the percentage of CPU time spent waiting on disk I/O (wio time). Under some circumstances, the method used in AIX 4.3.2 and earlier versions of the operating system can give an inflated view of wio time on SMPs.

The method used in AIX 4.3.2 and earlier is as follows: at each clock interrupt on each processor (100 times a second per processor), a determination is made as to which of four categories (usr/sys/wio/idle) the last 10 ms of time should be placed in. If the CPU was busy in usr mode at the time of the clock interrupt, the usr category gets the clock tick. If the CPU was busy in kernel mode, the sys category gets the tick. If the CPU was not busy, a check is made to see whether any disk I/O is in progress; if so, the wio category is incremented, and if not, the idle category gets the tick. The inflated view of wio time results from all idle CPUs being categorized as wio, regardless of the number of threads waiting on I/O. For example, a system with just one thread doing I/O could report over 90 percent wio time regardless of how many CPUs it has. The sar (%wio), vmstat (wa), and iostat (% iowait) commands report wio time.
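The old accounting rule can be sketched as a toy simulation (the tick data below is invented for illustration, not from a real trace): each record is a CPU state at one clock tick plus a flag saying whether any disk I/O was in progress, and an idle CPU with I/O pending gets charged to wio:

```shell
# Each line: cpu_state io_pending (1 = some disk I/O in progress).
# Under the pre-4.3.3 rule, 'idle 1' counts as wio, so idle CPUs inflate
# wio whenever any single thread has I/O outstanding.
ticks='usr 0
sys 1
idle 1
idle 1
idle 0
usr 1'
echo "$ticks" | awk '
  $1=="usr" {usr++}
  $1=="sys" {sys++}
  $1=="idle" && $2==1 {wio++}
  $1=="idle" && $2==0 {idle++}
  END {printf "usr=%d sys=%d wio=%d idle=%d\n", usr, sys, wio, idle}'
```

Here two of the three idle ticks land in wio even though only one I/O might be outstanding — the inflation the AIX 4.3.3 change corrects.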

Original link: http://www.ibm.com/developerworks/cn/aix/library/nmon_analyser/

 



Usage note: the nmon_analyser tool is not officially supported. No warranty is given or implied, and you cannot obtain help with it from IBM.

The tool is currently provided as a Microsoft® Excel™ 2000 (or later) spreadsheet.

nmon_analyser is designed for the latest version of nmon, but it has also been tested against older versions for backward compatibility. The tool is updated whenever nmon is updated, and from time to time to add new function. To have your name added to the e-mail list for update notices, contact Stephen Atkins.

What the tool does

The nmon_analyser tool helps analyze performance data captured with the nmon performance tool. It allows a performance specialist to:

  • View the data in spreadsheet form
  • Eliminate 'bad' data
  • Produce graphs for presentation to clients
The tool also automatically produces graphs for each major section of the output.

In addition, the tool analyzes the nmon data to produce:

  • Calculation of weighted averages for hot-spot analysis
  • Distribution of CPU utilization expressed as the ratio of processors used to collection intervals, which helps identify single-threaded processes
  • An additional section for IBM TotalStorage® Enterprise Storage Server (ESS) vpaths showing device busy, read transfer size, and write transfer size by hour of the day
  • Total system data rates by hour of the day, adjusted to eliminate double counting of EMC hdiskpower devices, which helps identify I/O-subsystem and SAN (storage area network) bottlenecks
  • Separate worksheets for EMC Corporation (EMC) hdiskpower and DS4000 (formerly FAStT) dac devices
  • Analysis of memory utilization showing the difference between computational and non-computational pages
  • Total data rates by hour of the day for each network adapter
  • Summarization of TOP-section data showing average CPU and memory utilization for each command







New features

New features of the nmon_analyser tool include:

  • Support for AIX® 5.3 and micro-partitioning (NMON10)
  • Support for input files with more than 65K rows
  • Improved graph scaling and positioning
  • An option to specify which sheets to analyze
  • Support for print display with automatic pagination
  • Automated web publishing in PNG or GIF format







Installing the tool

  • The tool is distributed as a .zip file containing the .xls file, comprehensive user documentation, sample input files, shell scripts to assist with national-language conversion, and a Perl program for splitting large input files. Installation simply consists of unzipping the package into a suitable directory.






Getting the tool

The following downloads are available:







Sample output


The main chart shows CPU and I/O utilization during the collection interval:


An optional chart shows vpath service times:







Documentation

Comprehensive documentation is included in the distribution. It can be opened with Microsoft Word and covers how to collect data from nmon, how to use the analyser, national-language issues, guidelines for interpreting the data, and a detailed explanation of every field produced by nmon.



Resources

Author: Piner

Original: http://www.ixdba.com/html/y2007/m06/128-oracle-memory-patch.html

 

 

This problem, as far as I know, was first discovered by gototop many years ago. To this day it has never really been fixed at the root, so this patch still has to be applied. The bug description:

    #  Bugs resolved by this patch in conjunction with APAR IY49415:
    #  -------------------------------------------------------------
    #  3028673:  ORACLE ON AIX DOES NOT SHARE MANY CONST STRUCTS - PER
    #            PROCESS MEMORY OVERHEAD

More details can be found in MetaLink Note 259983.1. On the earlier AIX 4.3 and 5.1, the workaround at the time was:

$ AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE

$ NUM_SPAREVP=1; export NUM_SPAREVP

From AIX 5.2 onwards, however, this workaround no longer works, so AIX 5.2 introduced APAR IY49415 to address the problem. The APAR is not listed for AIX 5.3, but that does not mean AIX 5.3 lacks the fix; it is simply already fully incorporated there.

AIX 5.2# instfix -a -ivk IY49415

    IY49415 Abstract: read-only reloc linking/loading support 

    IY49415 Symptom Text:
     Programs having large amounts of read-only address constants
     (compared to our competitors binaries of the same programs),
     consume excessive amounts of memory under AIX since it has no
     support to place address constants in read-only memory (text).

    ----------------------------
        Fileset bos.64bit:5.2.0.12 is applied on the system.
        Fileset bos.mp:5.2.0.18 is applied on the system.
        Fileset bos.mp64:5.2.0.18 is applied on the system.
        Fileset bos.rte.bind_cmds:5.2.0.13 is applied on the system.
        Fileset bos.up:5.2.0.18 is applied on the system.
        All filesets for IY49415 were found.

On AIX 5.2 systems with this APAR, and on all AIX 5.3 systems, Oracle provides patch 3028673. The way the patch is applied is also unusual: instead of the traditional OPatch method, you relink a new oracle executable. The ultimate goal, achieved by modifying the source directly, is to let Oracle's many processes share a portion of resources, a little over 1 MB, that previously could not be shared. The patch therefore reduces each oracle process's memory use by slightly more than 1 MB, which is very worthwhile on systems with a great many processes and tight memory. The relink procedure is:

    Relink the oracle binary
    ~~~~~~~~~~~~~~~~~~~~~~~~

     1  save your current version of $ORACLE_HOME/oracle
     2  create a working directory $ORACLE_HOME/relink
     3  cd to $ORACLE_HOME/relink
     4  unzip the relinking package
     5  link $ORACLE_HOME/bin/oracle to ./oracle0
     6  run the script ./genscript to generate some required files and scripts
     7  run ./relink.sh to generate the new oracle binary oracle0.new.$$
     8  copy oracle0.new.$$ to $ORACLE_HOME/bin/oracle and verify that the
     permissions match the original oracle binary.

Also note in particular that the patch description says:

    #  Patch Special Instructions
    #  ---------------------------
    #  This patch is for AIX 5.2 systems only.
    #
    #  It is valid for all 920* AIX 5.2 systems.

That restriction exists only because AIX 5.3 did not exist at the time; in fact the patch can be used on any Oracle 920 system on AIX 5.2 or 5.3. Below is a before-and-after comparison. Note the SIZE column: the difference is a little over 1 MB per process, so with 1000 processes you can save 1-2 GB of memory.

    Before the patch: $ ps gv|grep oracle
        PID     TTY STAT  TIME PGIN  SIZE   RSS   LIM  TSIZ   TRS %CPU %MEM COMMAND
     483436      - A    199:48   12  4804 25876    xx 49801 21136  0.5  0.0 oracletb
     602170      - A    198:55    0  4804 25940    xx 49801 21136  0.5  0.0 oracletb
     610420      - A    209:55    6  4844 25980    xx 49801 21136  0.5  0.0 oracletb
     630988      - A    145:18    9  4860 25932    xx 49801 21136  0.4  0.0 oracletb
     639154      - A    199:59   10  4828 25900    xx 49801 21136  0.5  0.0 oracletb
     643276      - A    191:42    4  4792 25864    xx 49801 21136  0.5  0.0 oracletb
     651494      - A    193:13    6  4844 25916    xx 49801 21136  0.5  0.0 oracletb
     671756      - A    204:38   10  4776 25848    xx 49801 21136  0.5  0.0 oracletb
     ......

    After the patch: $ ps gv|grep oracle
        PID    TTY STAT  TIME PGIN  SIZE   RSS   LIM  TSIZ   TRS %CPU %MEM COMMAND
     639170      - A     0:01    3  3036 77100    xx 50917 74064  0.1  0.0 oracletb
     643300      - A     0:00    0  3012 77076    xx 50917 74064  0.0  0.0 oracletb
     651514      - A     0:03    1  3196 77196    xx 50917 74064  0.1  0.0 oracletb
     671762      - A     0:05    2  3120 77184    xx 50917 74064  0.2  0.0 oracletb
     675850      - A     0:04    0  3120 77120    xx 50917 74064  0.2  0.0 oracletb
     680040      - A     0:06    0  3120 77184    xx 50917 74064  0.2  0.0 oracletb
     688218      - A     0:05    0  3116 77180    xx 50917 74064  0.2  0.0 oracletb
     700614      - A     0:09    2  3120 77120    xx 50917 74064  0.2  0.0 oracletb
     ......
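The per-process saving can be checked by summing the SIZE column (in KB) across the two listings. The values below are copied from the eight sample rows shown above; on a live system you would feed `ps gv | grep oracle` through the same awk step instead:

```shell
# SIZE values (KB) from the eight pre-patch and post-patch rows above.
pre=$( printf '4804\n4804\n4844\n4860\n4828\n4792\n4844\n4776\n' | awk '{s+=$1} END {print s}')
post=$(printf '3036\n3012\n3196\n3120\n3120\n3120\n3116\n3120\n' | awk '{s+=$1} END {print s}')
# Difference in total and per process; ~1.7 MB per process matches the
# "a little over 1 MB" claim in the text.
echo "saved $((pre - post)) KB across 8 processes, $(((pre - post) / 8)) KB per process"
```

Scaled to 1000 processes, 1714 KB each is roughly 1.6 GB, inside the 1-2 GB range quoted above.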

RAC Specific Processes


The following are the additional processes spawned for supporting the multi-instance coordination:

LMON - The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage the global enqueues and the resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as cluster group services (CGS).

LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues to control access to global enqueues and resources. The LMD process also handles deadlock detection and remote enqueue requests. Remote resource requests are the requests originating from another instance.

Sections

1. Overview of memory management
2. The mysterious 880 MB limit on x86
3. The difference among VIRT, RES, and SHR in top output
4. The difference between buffers and cache
5. Swappiness (2.6 kernels)

1. Overview of memory management
Traditional Unix tools like 'top' often report a surprisingly small amount of free memory after a system has been running for a while. For instance, after about 3 hours of uptime, the machine I'm writing this on reports under 60 MB of free memory, even though I have 512 MB of RAM on the system. Where does it all go?

The biggest place it's being used is in the disk cache, which is currently over 290 MB. This is reported by top as "cached". Cached memory is essentially free, in that it can be replaced quickly if a running (or newly starting) program needs the memory.

The reason Linux uses so much memory for disk cache is because the RAM is wasted if it isn't used. Keeping the cache means that if something needs the same data again, there's a good chance it will still be in the cache in memory. Fetching the information from there is around 1,000 times quicker than getting it from the hard disk. If it's not found in the cache, the hard disk needs to be read anyway, but in that case nothing has been lost in time.

To see a better estimation of how much memory is really free for applications to use, run the command:

free -m

The -m option stands for megabytes, and the output will look something like this:

             total       used       free     shared    buffers     cached
Mem:           503        451         52          0         14        293
-/+ buffers/cache:        143        360
Swap:         1027          0       1027

The -/+ buffers/cache line shows how much memory is used and free from the perspective of the applications. Generally speaking, if little swap is being used, memory usage isn't impacting performance at all.
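The "free" figure on the -/+ buffers/cache line is simply MemFree plus Buffers plus Cached. The sketch below computes it from a sample /proc/meminfo fragment (the kB values are invented to roughly match the free output above); on a real Linux box you would read /proc/meminfo itself instead of the here-string:

```shell
# Sample /proc/meminfo fragment (values in kB, illustrative only).
meminfo='MemTotal:       515524 kB
MemFree:         53248 kB
Buffers:         14336 kB
Cached:         300032 kB'
# MemFree + Buffers + Cached, converted from kB to MB, is a rough
# estimate of the memory applications could claim.
echo "$meminfo" | awk '/^MemFree|^Buffers|^Cached/ {kb+=$2} END {printf "%d MB free for apps\n", kb/1024}'
```

This lands near the 360 MB "free" figure in the free -m output above, as expected.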

Notice that I have 512 MB of memory in my machine, but only 503 is listed as available by free. This is mainly because the kernel can't be swapped out, so the memory it occupies could never be freed. There may also be regions of memory reserved for/by the hardware for other purposes as well, depending on the system architecture.

2. The mysterious 880 MB limit on x86
By default, the Linux kernel runs in and manages only low memory. This makes managing the page tables slightly easier, which in turn makes memory accesses slightly faster. The downside is that it can't use all of the memory once the amount of total RAM reaches the neighborhood of 880 MB. This has historically not been a problem, especially for desktop machines.

To be able to use all the RAM on a machine with 1 GB or more, the kernel needs to be recompiled. Go into 'make menuconfig' (or whichever config interface is preferred) and set the following option:

Processor Type and Features ---->
High Memory Support ---->
(X) 4GB

This applies both to 2.4 and 2.6 kernels. Turning on high memory support theoretically slows down accesses slightly, but according to Joseph_sys and log, there is no practical difference.

3. The difference among VIRT, RES, and SHR in top output
VIRT stands for the virtual size of a process, which is the sum of memory it is actually using, memory it has mapped into itself (for instance the video card's RAM for the X server), files on disk that have been mapped into it (most notably shared libraries), and memory shared with other processes. VIRT represents how much memory the program is able to access at the present moment.

RES stands for the resident size, which is an accurate representation of how much actual physical memory a process is consuming. (This also corresponds directly to the %MEM column.) This will virtually always be less than the VIRT size, since most programs depend on the C library.

SHR indicates how much of the VIRT size is actually sharable (memory or libraries). In the case of libraries, it does not necessarily mean that the entire library is resident. For example, if a program only uses a few functions in a library, the whole library is mapped and will be counted in VIRT and SHR, but only the parts of the library file containing the functions being used will actually be loaded in and be counted under RES.
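The VIRT >= RES relationship can be checked for any process with ps, whose vsz and rss columns correspond to top's VIRT and RES (both reported in KB). Here the current shell inspects itself:

```shell
# vsz = virtual size (KB), rss = resident set size (KB); resident pages
# are a subset of the mapped address space, so rss should not exceed vsz.
ps -o pid= -o vsz= -o rss= -p $$
```

The exact numbers vary from system to system, but the rss figure will be the smaller of the two.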

4. The difference between buffers and cache
Buffers are associated with a specific block device, and cover caching of filesystem metadata as well as tracking in-flight pages. The cache only contains parked file data. That is, the buffers remember what's in directories, what file permissions are, and keep track of what memory is being written from or read to for a particular block device. The cache only contains the contents of the files themselves.

Corrections and additions to this section welcome; I've done a bit of guesswork based on tracing how /proc/meminfo is produced to arrive at these conclusions.

5. Swappiness (2.6 kernels)
Since 2.6, there has been a way to tune how much Linux favors swapping out to disk compared to shrinking the caches when memory gets full.

ghoti adds:
When an application needs memory and all the RAM is fully occupied, the kernel has two ways to free some memory at its disposal: it can either reduce the disk cache in the RAM by eliminating the oldest data or it may swap some less used portions (pages) of programs out to the swap partition on disk.
It is not easy to predict which method would be more efficient.
The kernel makes a choice by roughly guessing the effectiveness of the two methods at a given instant, based on the recent history of activity.

Before the 2.6 kernels, the user had no means of influencing these calculations, and situations could arise where the kernel often made the wrong choice, leading to thrashing and slow performance. The addition of swappiness in 2.6 changes this.
Thanks, ghoti!

Swappiness takes a value between 0 and 100 to change the balance between swapping applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; in other cases, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.

The default swappiness is 60. A value of 0 gives something close to the old behavior where applications that wanted memory could shrink the cache to a tiny fraction of RAM. For laptops which would prefer to let their disk spin down, a value of 20 or less is recommended.

As a sysctl, the swappiness can be set at runtime with either of the following commands:

# sysctl -w vm.swappiness=30
# echo 30 >/proc/sys/vm/swappiness

The default when Gentoo boots can also be set in /etc/sysctl.conf:

# Control how much the kernel should favor swapping out applications (0-100)
vm.swappiness = 30

Some patchsets allow the kernel to auto-tune the swappiness level as it sees fit; they may not keep a user-set value.

Sysstat: tools for examining Linux system status

Original link:
 
1. About Sysstat

Sysstat is a software package containing a set of tools for monitoring system performance and efficiency. These tools collect performance data such as CPU utilization and disk and network throughput; collecting and analyzing such data helps us judge whether a system is running normally, making the package a valuable aid for improving efficiency and running servers safely.


The Sysstat package includes the following tools:

    * iostat reports CPU utilization and disk throughput;
    * mpstat reports statistics for a single processor or for all processors;
    * sar collects, reports, and stores information on system activity;
    * sa1 collects and stores daily system activity data in a binary file. It is run from the cron
        scheduler and is the front-end program designed for sadc;
    * sa2 writes a daily summary report of system activity. It is the front end designed for sar,
        and is likewise invoked from cron;
    * sadc is the system activity data collector; the data it gathers is written to a binary file,
        and it acts as the back end for sar;
    * sadf displays data collected by sar in a variety of formats;
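Since sa1 and sa2 are driven by cron, their schedule typically lives in a cron file like the one sketched below. Treat the exact paths (/etc/cron.d/sysstat, /usr/lib64/sa/...) as assumptions — they vary by distribution — but the shape of the entries is standard: sample every 10 minutes, summarize just before midnight.

```shell
# /etc/cron.d/sysstat -- typical layout (paths are distribution-specific)
# Take one activity sample every 10 minutes via sadc's front end, sa1
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# Write the daily summary report via sar's front end, sa2, at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A
```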


2. Installing and running Sysstat

Most distributions carry this package under a name beginning with sysstat, so it can be installed over the network:


2.1 On Debian or other deb-based systems

[root@localhost ~]# apt-get install sysstat


2.2 On Fedora or other RPM-managed systems

[root@localhost ~]# yum install sysstat

If you have the RPM package itself, install it with the following command:

[root@localhost ~]# rpm -ivh sysstat*.rpm

For more about the yum and rpm package-management tools, see 《Fedora / Redhat 软件包管理指南》 (Fedora / Red Hat package management guide).

Original link:
http://logzgh.itpub.net/post/3185/284963

These three arrays are all high-end storage products, and the vendors all quote very high IOPS figures.
But which of them actually performs better in practice?

A while ago piner posted the basic architecture diagrams of these arrays on his blog; have a look if you are interested.

Each of the three has its own architectural characteristics and its own cache algorithms, so judged from that angle it is genuinely hard to say which is better. My private view is that, as far as front-end architecture and cache algorithms go, the three products should not differ much.

However, the back ends of the DMX3 and the HDS USP both use a loop architecture, that is, the FC-AL protocol, while the DS8000 uses a switched design, presumably the FC-SW protocol.

The FC-AL arbitrated-loop protocol allows only two devices to transfer data at any given moment: the loop, like a hub, is a shared bus, so only two devices can communicate point-to-point at a time. In theory, once 50-60 disks hang off one fibre loop, the Fibre Channel link is essentially at its performance ceiling.

With a loop design, performance also depends heavily on data placement and on the disks' own caches. If one disk stays busy for a long time, its cache will be exhausted and requests must go to the platters, causing the rest of the loop to wait and inevitably dragging down the performance of the whole array.

In practice, once the IOPS of an HDS USP goes beyond 16,000, performance drops sharply and hits an inflection point, which may well be related to this loop design.

The DS8000, by contrast, uses a switched back end: through internal switches, a device adapter can reach any disk at any moment. From the back-end point of view, then, the DS8000 should be superior to the other two.

According to IBM itself, it ranks first in the SPC-1 (random disk I/O) and SPC-2 (sequential disk I/O) benchmarks, although EMC and HDS have not published numbers for those tests. Still, IBM storage does not enjoy a great reputation, perhaps dragged down by the DS4000 series (the DS4000 products are OEMed; only the DS6000 and DS8000 are IBM's own designs).


So which is best in the real world? I would not dare to say. Does anyone have hands-on experience?

Pregnancy is a complex process. After the egg is fertilized it enters the uterine cavity, and the embryo and its appendages grow and develop rapidly until maturity, with different changes in every week of gestation. What will your baby look like during each week of early pregnancy? The article below explains in detail.

Week 4: the fetus is only 0.2 cm. The fertilized egg has just implanted and the amniotic cavity has only just formed, so it is tiny. Ultrasound cannot yet show any sign of the pregnancy.

Week 5: the fetus grows to 0.4 cm and enters the embryonic stage. The amniotic cavity enlarges, and the primitive cardiovascular system appears and may pulsate. B-mode ultrasound can show a small gestational sac, occupying less than 1/4 of the uterine cavity, and possibly a fetal pole.

Week 6: the fetus grows to 0.85 cm. The head, brain vesicles, facial organs, and the respiratory, digestive, and nervous systems are differentiating. On ultrasound the gestational sac is clearly visible, along with the fetal pole and fetal heartbeat.

Week 7: the fetus grows to 1.33 cm. The embryo now has a human outline, the somites are fully differentiated, the limbs have separated, and the organ systems develop further. Ultrasound clearly shows the fetal pole and heartbeat, with the sac occupying about 1/3 of the uterine cavity.

Week 8: the fetus grows to 1.66 cm. Its shape is established: head, body, and limbs can be distinguished, with the head larger than the trunk. Ultrasound shows the sac occupying about 1/2 of the uterine cavity; the fetal form and movements are clearly visible, and the yolk sac can be seen.

Week 9: the fetus grows to 2.15 cm. The head is larger than the body, each part is more distinct, the skull begins to calcify, and the placenta starts to develop. Ultrasound shows the sac almost filling the uterine cavity, a clearer fetal outline, and the placenta beginning to appear.

Week 10: the fetus grows to 2.83 cm. All organs have formed and a rudimentary placenta is in place. On ultrasound the gestational sac is starting to disappear, a crescent-shaped placenta is visible, and the fetus moves actively in the amniotic fluid.

Week 11: the fetus grows to 3.62 cm. The organs develop further, as does the placenta. On ultrasound the sac has completely disappeared and the placenta is clearly visible.

Week 12: the fetus grows to 4.58 cm. The external genitalia have begun to develop, so malformations there can now show; skull calcification is more complete, and the cranial halo is clear, so the biparietal diameter can be measured. Obvious malformations can be diagnosed, and from this point the organs mature further.


Mean ultrasound measurements by gestational week (BPD = biparietal diameter, AC = abdominal circumference, FL = femur length; values in cm, mean ± SD):

Week   BPD          AC            FL
 13    2.52±0.25     6.90±1.65    1.17±0.31
 14    2.83±0.57     7.77±1.82    1.38±0.48
 15    3.23±0.51     9.13±1.56    1.74±0.58
 16    3.62±0.58    10.32±1.92    2.10±0.51
 17    3.97±0.44    11.49±1.62    2.52±0.44
 18    4.25±0.53    12.41±1.89    2.71±0.46
 19    4.52±0.53    13.59±2.30    3.03±0.50
 20    4.88±0.58    14.80±1.89    3.35±0.47
 21    5.22±0.42    15.62±1.84    3.64±0.40
 22    5.45±0.57    16.70±2.23    3.82±0.47
 23    5.80±0.44    17.90±1.85    4.21±0.41
 24    6.05±0.50    18.74±2.23    4.36±0.51
 25    6.39±0.70    19.64±2.20    4.65±0.42
 26    6.68±0.61    21.62±2.30    4.87±0.41
 27    6.98±0.57    21.81±2.12    5.10±0.41
 28    7.24±0.65    22.86±2.41    5.35±0.55
 29    7.50±0.65    23.71±1.50    5.61±0.44
 30    7.83±0.62    24.88±2.03    5.77±0.47
 31    8.06±0.60    25.78±2.32    6.03±0.38
 32    8.17±0.65    26.20±2.33    6.43±0.49
 33    8.50±0.47    27.78±2.30    6.52±0.46
 34    8.61±0.63    27.99±2.55    6.62±0.43
 35    8.70±0.55    28.74±2.88    6.71±0.45
 36    8.81±0.57    29.44±2.83    6.95±0.47
 37    9.00±0.63    30.14±2.17    7.10±0.52
 38    9.08±0.59    30.63±2.83    7.20±0.43
 39    9.21±0.59    31.34±3.12    7.34±0.53
 40    9.28±0.50    31.49±2.79    7.40±0.53

Automatic Fault Recovery

Oracle performs recovery automatically on two occasions:

  • At the first database open after the crash of a single-instance database or all instances of an Oracle Real Applications Cluster database (crash recovery).
  • When some but not all instances of an Oracle Real Application Clusters configuration fail (instance recovery). The recovery is performed automatically by a surviving instance in the configuration.

The important point is that in both crash and instance recovery, Oracle will automatically recover data to a transactionally consistent state.  This means the datafiles will contain all committed changes, and will not contain any uncommitted changes.  Oracle returns to the transactionally consistent state by rolling forward changes captured in the log files but not the datafiles, and rolling back changes that had not been committed.  This roll forward and roll back process is called crash recovery.  In a Real Application Clusters environment, this process is performed by a surviving instance and called instance recovery.

Why is recovery necessary?

To improve performance, Oracle keeps many changes in memory, even after they are committed.  It may also write data to the datafiles to free up memory, even though the changes have not been committed.  At the time of a failure, all data in memory is lost.  In order to ensure no committed changes are lost, Oracle records all operations in an online redo logfile.  The information in the log file allows Oracle to redo any operations that may be lost in a failure.  Writing to the logfile does not hurt performance, because these writes are sequential and very fast.  Writing to datafiles on the other hand is random and can be very slow because the disk block to be modified on disk must be located, and the disk head properly positioned for every write.

[Figure: disk layout]

Cache Recovery (Roll Forward)

During cache recovery, Oracle replays transactions in the online redo log beginning with the checkpoint position.  The checkpoint position is the place in the redo log where changes associated with previous redo entries had been saved to the datafiles before the failure.  As Oracle replays the redo operations, it applies both committed and uncommitted changes to the datafiles.  At the conclusion of the roll forward phase, the data files contain all committed changes, as well as new uncommitted changes (applied during roll forward) and old uncommitted changes (saved to the datafiles to free up space in buffer cache prior to the failure).

The database cannot open until the roll forward phase is complete.

Transaction Recovery (Roll Back)

During transaction recovery, Oracle searches out changes associated with dead transactions that had not committed before the failure occurred.  Undo blocks (whether in rollback segments or automatic undo tablespaces) record database actions that should be undone during certain database operations. In database recovery, the undo blocks roll back the effects of uncommitted transactions previously applied by the rolling forward phase. After the roll forward, any changes that were not committed must be undone. Oracle applies undo blocks to roll back uncommitted changes in data blocks that were either written before the crash or introduced by redo application during cache recovery. This process is called rolling back or transaction recovery.  Oracle can roll back multiple transactions simultaneously as needed. All transactions systemwide that were active at the time of failure are marked as dead. Instead of waiting for SMON to roll back dead transactions, new transactions can recover blocking transactions themselves to get the row locks they need.

some useful link:

http://www.oracle.com/technology/deploy/availability/htdocs/std_recovery.html

http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/backrec.htm

 


Powered by Movable Type 6.3.2

About this Archive

This page is an archive of entries from July 2007 listed from newest to oldest.

June 2007 is the previous archive.

August 2007 is the next archive.

Back to the home page to read recent entries, or browse all the archived entries.