Regression with Stata, Chapter 4: Beyond OLS
Chapter Outline
4.1 Robust Regression Methods
    4.1.1 Regression with Robust Standard Errors
    4.1.2 Using the Cluster Option
    4.1.3 Robust Regression
    4.1.4 Quantile Regression
4.2 Constrained Linear Regression
4.3 Regression with Censored or Truncated Data
    4.3.1 Regression with Censored Data
    4.3.2 Regression with Truncated Data
4.4 Regression with Measurement Error
4.5 Multiple Equation Regression Models
    4.5.1 Seemingly Unrelated Regression
    4.5.2 Multivariate Regression
4.6 Summary
4.7 Self assessment
4.8 For more information
In this chapter we
will go into various commands that go beyond OLS. This chapter is a bit different from
the others in that it covers a number of different concepts, some of which may be new
to you. These extensions, beyond OLS, have much of the look and feel of OLS but will
provide you with additional tools to work with linear models.
The topics will include robust regression methods, constrained linear regression,
regression with censored and truncated data, regression with measurement error, and
multiple equation models.
4.1 Robust Regression Methods
It seems to be a rare dataset that meets all of the assumptions underlying multiple
regression. We know that failure to meet assumptions can lead to biased estimates of
coefficients and especially biased estimates of the standard errors. This fact explains a
lot of the activity in the development of robust regression methods.
The idea behind robust regression methods is to make adjustments in the estimates that
take into account some of the flaws in the data itself. We are going to look at three
approaches to robust regression: 1) regression with robust standard errors including the cluster
option, 2) robust regression using iteratively reweighted least squares, and 3) quantile
regression, more specifically, median regression.
Before we look at these approaches, let's look at a standard OLS regression using the
elementary school academic performance index (elemapi2.dta) dataset.
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
We will look at a model that predicts the api 2000 scores using the average class size
in K through 3 (acs_k3), average class size 4 through 6 (acs_46), the
percent of fully credentialed teachers (full), and the size of the school (enroll).
First let's look at the descriptive statistics for these variables. Note the missing
values for acs_k3 and acs_46.
summarize api00 acs_k3 acs_46 full enroll
(output omitted)
Below we see the regression predicting api00 from acs_k3, acs_46,
full and enroll. We see that all of the variables are significant except for acs_k3.
regress api00 acs_k3 acs_46 full enroll
(output omitted)
We can use the test command to test both of the class size variables,
and we find the overall test of these two variables is significant.
test acs_k3 acs_46
acs_k3 = 0.0
acs_46 = 0.0
Prob > F =
Here is the residual versus fitted plot for this regression. Notice that the pattern of
the residuals is not exactly as we would hope. The spread of the residuals is
somewhat wider toward the middle right of the graph than at the left, where the
variability of the residuals is somewhat smaller, suggesting some heteroscedasticity.
Below we show the avplots. Although the plots are small, you can see some
points that are of concern. There is not a single extreme point (like we saw in chapter 2),
but a handful of points that stick out. For example, in the top right graph you can
see a handful of points that stick out from the rest. If this were just one or two
points, we might look for mistakes or for outliers, but we would be more reluctant to
consider such a large number of points as outliers.
Here is the lvr2plot for this regression. We see 4 points that are
somewhat high in both their leverage and their residuals.
None of these results are dramatic problems, but the rvfplot suggests that there
might be some outliers and some possible heteroscedasticity; the avplots have some
observations that look to have high leverage, and the lvr2plot shows some
points in the upper right quadrant that could be influential. We might wish to use
something other than OLS regression to estimate this model. In the next several sections
we will look at some robust regression methods.
4.1.1 Regression with Robust Standard Errors
The Stata regress command includes a robust option for
estimating the standard errors using the Huber-White sandwich estimators. Such robust
standard errors can deal with a collection of minor concerns about failure to meet
assumptions, such as minor problems about normality, heteroscedasticity, or some
observations that exhibit large residuals, leverage or influence. For such minor problems,
the robust option may effectively deal with these concerns.
With the robust option, the point estimates of the coefficients are exactly the
same as in ordinary OLS, but the standard errors take into account issues concerning
heterogeneity and lack of normality. Here is the same regression as above using the robust
option. Note the changes in the standard errors and t-tests (but no change in the
coefficients). In this particular example, using robust standard errors did not change any
of the conclusions from the original OLS regression.
regress api00 acs_k3 acs_46 full enroll, robust
Regression with robust standard errors
(output omitted)
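To see the arithmetic behind robust standard errors outside of Stata, here is a minimal NumPy sketch on simulated heteroscedastic data (not the elemapi2 dataset). The HC1 small-sample scaling shown is an assumption chosen to match common practice; the key behavior is that the point estimates are the plain OLS estimates, while the sandwich variance uses each observation's squared residual.

```python
import numpy as np

def ols_with_robust_se(X, y):
    """OLS point estimates plus conventional and Huber-White (HC1) standard
    errors. X is assumed to already contain a constant column."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # conventional variance: s^2 (X'X)^-1
    s2 = resid @ resid / (n - k)
    se_ols = np.sqrt(np.diag(s2 * XtX_inv))
    # sandwich: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1, with HC1 scaling n/(n-k)
    meat = (X * resid[:, None] ** 2).T @ X
    V = XtX_inv @ meat @ XtX_inv * n / (n - k)
    return beta, se_ols, np.sqrt(np.diag(V))

# simulated data whose error spread widens away from the center of x
rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 500)
y = 2 + 3 * x + rng.normal(0, 1, 500) * (0.5 + np.abs(x - 5))
X = np.column_stack([np.ones_like(x), x])
beta, se_ols, se_rob = ols_with_robust_se(X, y)
```

Either way the coefficients come from the same OLS fit; only the estimated variance changes, which mirrors the behavior of the robust option described above.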
4.1.2 Using the Cluster Option
As described in Chapter 2, OLS regression assumes that the residuals are independent.
The elemapi2 dataset contains data on 400 schools that come from 37 school
districts. It is very possible that the scores within each school district may not be
independent, and this could lead to residuals that are not independent within districts.
We can use the cluster option to indicate that the observations
are clustered into districts (based on dnum) and that the observations
may be correlated within districts, but would be independent between districts.
By the way, if we did not know the number of districts, we could quickly find out how
many districts there are as shown below, by quietly tabulating dnum
and then displaying the macro r(r) which gives the numbers of rows in the
table, which is the number of school districts in our data.
quietly tabulate dnum
display r(r)
Now, we can run regress with the cluster option. We do not need to include the
robust option since robust is implied with cluster. Note that the standard errors have
changed substantially, much more so than the change caused by the robust option by itself.
regress api00 acs_k3 acs_46 full enroll, cluster(dnum)
Regression with robust standard errors
Number of clusters (dnum) = 37
(output omitted)
As with the robust option, the estimates of the coefficients are the
same as the OLS estimates, but the standard errors take into account that the observations
within districts are non-independent. Even though the standard errors are larger in
this analysis, the three variables that were significant in the OLS analysis are
significant in this analysis as well. These standard errors are computed based on
aggregate scores for the 37 districts, since these district level scores should be
independent. If you have a very small number of clusters compared to your overall sample
size, it is possible that the standard errors could be quite a bit larger than the OLS results.
For example, if there were only 3 districts, the standard errors would be computed on the
aggregate scores for just 3 districts.
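To make the clustering adjustment concrete, here is a small NumPy sketch on simulated districts (hypothetical data, not the real elemapi2 file). The residual "scores" are summed within each cluster before forming the middle of the sandwich, and the finite-sample correction factor is an assumption chosen to match the one commonly applied.

```python
import numpy as np

def cluster_robust_se(X, y, cluster_ids):
    """OLS with cluster-robust standard errors: the scores X_i' e_i are
    summed within clusters, so correlation inside a cluster is accommodated."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    groups = np.unique(cluster_ids)
    G = len(groups)
    meat = np.zeros((k, k))
    for g in groups:
        sg = X[cluster_ids == g].T @ resid[cluster_ids == g]  # cluster score
        meat += np.outer(sg, sg)
    c = (G / (G - 1)) * ((n - 1) / (n - k))  # finite-sample correction
    V = c * XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))

# 40 simulated districts of 10 schools each, sharing a district-level shock
rng = np.random.default_rng(1)
ids = np.repeat(np.arange(40), 10)
x = rng.normal(0, 1, 400)
y = 1 + 2 * x + rng.normal(0, 2, 40)[ids] + rng.normal(0, 1, 400)
X = np.column_stack([np.ones_like(x), x])
beta, se_cl = cluster_robust_se(X, y, ids)

# conventional iid standard errors, for comparison
resid = y - X @ beta
se_iid = np.sqrt(np.diag(resid @ resid / (400 - 2) * np.linalg.inv(X.T @ X)))
```

With a shared within-cluster shock like this, the cluster-robust standard error for the intercept is noticeably larger than the iid one, just as the district-level dependence inflated the standard errors above.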
4.1.3 Robust Regression
The Stata rreg command performs a robust regression using iteratively reweighted
least squares, i.e., rreg assigns a weight to each observation with higher weights given to
better behaved observations. In fact, extremely deviant cases, those with Cook's D greater than 1,
can have their weights set to missing so that they are not included in the analysis at all.
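The reweighting idea can be sketched numerically. The following Python code is a simplified IRLS loop with Huber weights on made-up data; it illustrates the general technique, not rreg itself (which combines Huber and biweight iterations and the Cook's D screen described above).

```python
import numpy as np

def huber_weights(u, c=1.345):
    """Huber weight function: full weight inside the cutoff c, c/|u| outside."""
    au = np.abs(u)
    return np.where(au <= c, 1.0, c / au)

def irls_huber(X, y, max_iter=100, tol=1e-8):
    """Robust regression by iteratively reweighted least squares: refit a
    weighted OLS until the coefficients stop changing."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    w = np.ones(len(y))
    for _ in range(max_iter):
        resid = y - X @ beta
        # robust scale: median absolute deviation, normalized for the normal
        s = np.median(np.abs(resid - np.median(resid))) / 0.6745
        w = huber_weights(resid / s)
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, w
        beta = beta_new
    return beta, w

# a clean line y = 1 + 2x with ten gross outliers at the low end of x
rng = np.random.default_rng(5)
x = np.linspace(0, 10, 100)
y = 1 + 2 * x + rng.normal(0, 0.5, 100)
y[:10] += 60
X = np.column_stack([np.ones_like(x), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_rob, w = irls_huber(X, y)
```

The outliers end up with tiny weights, so the robust slope stays near the true value while the ordinary OLS slope is dragged far off, which is the behavior the weight listings below make visible.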
We will use rreg with the generate option so that we can
inspect the weights used to weight the observations. Note that in this analysis both the
coefficients and the standard errors differ from the original OLS regression. Below we
show the same analysis using robust regression using the rreg command.
rreg api00 acs_k3 acs_46 full enroll, gen(wt)
Robust regression estimates
(output omitted)
If you compare the robust regression results (directly above) with the OLS results
previously presented, you can see that the coefficients and standard errors are quite
similar, and the t values and p values are also quite similar. Despite the minor problems
that we found in the data when we performed the OLS analysis, the robust regression
analysis yielded quite similar results suggesting that indeed these were minor problems.
Had the results been substantially different, we would have wanted to further
investigate the reasons why the OLS and robust regression results were different, and
among the two results the robust regression results would probably be the more
trustworthy.
Let's calculate and look at the predicted (fitted) values (p), the
residuals (r), and the leverage (hat) values (h). Note
that we are including if e(sample) in the commands because rreg can generate
weights of missing and you wouldn't want to have predicted values and residuals for those
observations.
predict p if e(sample)
(option xb assumed; fitted values)
(5 missing values generated)
predict r if e(sample), resid
(5 missing values generated)
predict h if e(sample), hat
(5 missing values generated)
Now, let's check on the various predicted values and the weighting. First, we will sort
by wt then we will look at the first 15 observations. Notice that the smallest
weights are near one-half but quickly get into the .7 range.
list snum api00 p r h wt in 1/15
Now, let's look at the last 10 observations. The weights for observations 391 to 395
are all very close to one. The values for observations 396 to the end are missing due to
the missing predictors. Note that the observations above that have the lowest weights are
also those with the largest residuals (residuals over 200) and the observations below with
the highest weights have very low residuals (all less than 3).
list snum api00 p r h wt in -10/l
After using rreg, it is possible to generate predicted values, residuals and
leverage (hat), but most of the regression diagnostic commands are not available after rreg.
We will have to create some of them for ourselves. Here, of course, is the graph of
residuals versus fitted (predicted) with a line at zero. This plot looks much like the OLS
plot, except that in the OLS all of the observations would be weighted equally, but as we
saw above the observations with the greatest residuals are weighted less and hence have
less influence on the results.
scatter r p, yline(0)
To get an lvr2plot we are going to have to go through several steps in order to
get the normalized squared residuals and the means of both the residuals and the leverage
(hat) values.
First, we generate the residual squared (r2) and then divide it by the
sum of the squared residuals. We then compute the mean of this value and save it as a
local macro called rm (which we will use for creating the
leverage vs. residual plot).
generate r2=r^2
(5 missing values generated)
summarize r2
(output omitted)
replace r2 = r2/r(sum)
(395 real changes made)
summarize r2
(output omitted)
local rm = r(mean)
Next we compute the mean of the leverage and save it as a local macro called hm.
summarize h
(output omitted)
local hm = r(mean)
Now, we can plot the leverage against the residual squared as shown below. Comparing
the plot below with the plot from the OLS regression, this plot is much better behaved.
There are no longer points in the upper right quadrant of the graph.
scatter h r2, yline(`hm') xline(`rm')
Let's close out this analysis by deleting our temporary variables.
drop wt p r h r2
4.1.4 Quantile Regression
Quantile regression, in general, and median regression, in particular, might be
considered as an alternative to rreg. The Stata command qreg does quantile
regression. qreg without any options will actually do a median regression in which
the coefficients will be estimated by minimizing the absolute deviations from the median.
Of course, as an estimate of central tendency, the median is a resistant measure that is
not as greatly affected by outliers as is the mean. It is not clear that median regression
is a resistant estimation procedure; in fact, there is some evidence that it can be
affected by high leverage values.
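The objective median regression minimizes, the sum of absolute deviations, can be sketched with a standard trick: iteratively reweighted least squares with weights 1/|residual|. This Python toy is my own illustration of the idea on made-up numbers, not the algorithm qreg uses; it shows the median fit ignoring a couple of gross outliers that tilt the OLS line.

```python
import numpy as np

def median_regression(X, y, iters=200, eps=1e-6):
    """L1 (median) regression via iteratively reweighted least squares:
    weighting each case by 1/|residual| turns the squared-error loss into
    an absolute-deviation loss at convergence."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta

x = np.arange(11.0)
y = 3 * x          # nine points fall exactly on y = 3x ...
y[9] += 40.0       # ... and two are gross outliers
y[10] -= 25.0
X = np.column_stack([np.ones_like(x), x])
b_l1 = median_regression(X, y)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Because a majority of points sit exactly on the line, the L1 fit recovers it almost exactly, while the mean-based OLS fit is pulled toward the outliers.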
Here is what the quantile regression looks like using Stata's qreg command. The
coefficient and standard error for acs_k3 are considerably different when
using qreg as compared to OLS using the regress command
(the coefficients are 1.2 vs 6.9 and the standard errors are 6.4 vs 4.3). The coefficients
and standard errors for the other variables are also different, but not as dramatically
different. Nevertheless, the qreg results indicate that, like the OLS
results, all of the variables except acs_k3 are significant.
qreg api00 acs_k3 acs_46 full enroll
Median regression
Raw sum of deviations     48534 (about 643)
Min sum of deviations  36268.11
(output omitted)
The qreg command has even fewer diagnostic options than rreg does. About
the only values we can obtain are the predicted values and the residuals.
predict p if e(sample)
(option xb assumed; fitted values)
(5 missing values generated)
predict r if e(sample), r
(5 missing values generated)
scatter r p, yline(0)
Stata has three additional commands that can do quantile regression.
iqreg estimates interquantile regressions, regressions of the difference in
quantiles. The estimated variance-covariance matrix of the estimators is obtained via
bootstrapping.
sqreg estimates simultaneous-quantile regression. It produces the same
coefficients as qreg for each quantile. sqreg obtains a bootstrapped
variance-covariance matrix of the estimators that includes between-quantiles blocks. Thus,
one can test and construct confidence intervals comparing coefficients describing
different quantiles.
bsqreg is the same as sqreg with one quantile. sqreg is, therefore,
faster than bsqreg.
4.2 Constrained Linear Regression
Let's begin this section by looking at a regression model using the hsb2 dataset.
The hsb2 file is a sample of 200 cases from the High School and Beyond
Study (Rock, Hilton, Pollack, Ekstrom & Goertz, 1985). It includes the
following variables: id, female, race, ses, schtyp,
program, read, write, math, science and socst.
The variables read, write, math, science and socst
are the results of standardized tests on reading, writing, math, science and
social studies (respectively), and the variable female is coded 1 if
female, 0 if male.
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/hsb2
Let's start by doing an OLS regression where we predict socst score
from read, write, math, science
and female (gender).
regress socst read write math science female
(output omitted)
Notice that the coefficients for read and write are very similar, which
makes sense since they are both measures of language ability. Also, the coefficients
for math and science are similar (in that they are both
not significantly different from 0). Suppose that we have a theory that suggests that read
and write should have equal coefficients, and that math
and science should have equal coefficients as well. We can test the equality
of the coefficients using the test command.
test read=write
read - write = 0.0
Prob > F =
We can also do this with the testparm command, which is especially
useful if you were testing whether 3 or more coefficients were equal.
testparm read write, equal
( 1) - read + write = 0.0
Prob > F =
Both of these results indicate that there is no significant difference in the
coefficients for the reading and writing scores. Since it appears that the coefficients
for math and science are also equal, let's test the
equality of those as well (using the testparm command).
testparm math science, equal
( 1) - math + science = 0.0
Prob > F =
Let's now perform both of these tests together, simultaneously testing that the
coefficient for read equals write and math
equals science. We do this using two test
commands, the second using the accum option to accumulate the first test
with the second test to test both of these hypotheses together.
test read=write
read - write = 0.0
Prob > F =
test math=science, accum
read - write = 0.0
math - science = 0.0
Prob > F =
Note this second test has 2 df, since it is testing both of the hypotheses listed, and
this test is not significant, suggesting these pairs of coefficients are not significantly
different from each other. We can estimate regression models where we constrain
coefficients to be equal to each other.& For example, let's begin on a limited scale
and constrain read to equal write. First, we will define a constraint and
then we will run the cnsreg command.
constraint define 1 read = write
cnsreg socst read write math science female, constraint(1)
Constrained linear regression
read - write = 0.0
(output omitted)
Notice that the coefficients for read and write are identical, along with
their standard errors, t-test, etc. Also note that the degrees of freedom for the F test
is four, not five, as in the OLS model. This is because only one coefficient is estimated
for read and write, estimated like a single variable equal to the sum of
their values. The Root MSE is slightly larger in the constrained model, because estimation subject to linear
restrictions cannot improve fit relative to the unrestricted model (the
coefficients that would minimize the SSE would be the coefficients from the
unconstrained model). However, in this particular example (because the
coefficients for read and write are already so similar) the decrease in model
fit from having constrained read and write to
equal each other is offset by the change in degrees of freedom.
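The point that one coefficient is estimated "like a single variable equal to the sum of their values" can be checked numerically. This Python sketch uses simulated scores (hypothetical data, not the hsb2 file): regressing on the sum enforces the equality constraint, and the constrained fit can never have a smaller sum of squared errors than the unconstrained one.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
read = rng.normal(50, 10, n)
write = rng.normal(50, 10, n)
socst = 5 + 0.40 * read + 0.45 * write + rng.normal(0, 5, n)

# unconstrained: separate coefficients for read and write
X_free = np.column_stack([np.ones(n), read, write])
b_free = np.linalg.lstsq(X_free, socst, rcond=None)[0]
sse_free = np.sum((socst - X_free @ b_free) ** 2)

# constrained (coef on read = coef on write): regress on their sum,
# which estimates the single common coefficient
X_con = np.column_stack([np.ones(n), read + write])
b_con = np.linalg.lstsq(X_con, socst, rcond=None)[0]
sse_con = np.sum((socst - X_con @ b_con) ** 2)
```

When the two true coefficients are nearly equal, as here, the constrained SSE is only trivially larger, which is the same pattern seen in the Root MSE of the constrained models above.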
Next, we will define a second constraint, setting math equal to science.
We will also abbreviate the constraints option to c.
constraint define 2 math = science
cnsreg socst read write math science female, c(1 2)
Constrained linear regression
read - write = 0.0
math - science = 0.0
(output omitted)
Now the coefficients for read = write and math = science,
and the degrees of freedom for the model have dropped to three. Again, the Root MSE
is slightly larger than in the prior model, but we should emphasize only very slightly
larger. If indeed the population coefficients for read = write
and math = science, then these combined (constrained) estimates
may be more stable and generalize better to other samples. So although these
estimates may lead to slightly higher standard error of prediction in this sample, they
may generalize better to the population from which they came.
4.3 Regression with Censored or Truncated Data
Analyzing data that contain censored values or are truncated is common in many research
disciplines. According to Hosmer and Lemeshow (1999), a censored value is one whose value
is incomplete due to random factors for each subject. A truncated observation, on the
other hand, is one which is incomplete due to a selection process in the design of the
study. We will begin by looking at analyzing data with censored values.
4.3.1 Regression with Censored Data
In this example we have a variable called acadindx which is a weighted
combination of standardized test scores and academic grades. The maximum possible score on
acadindx is 200 but it is clear that the 16 students who scored 200 are not exactly
equal in their academic abilities. In other words, there is variability in academic
ability that is not being accounted for when students score 200 on acadindx. The variable acadindx
is said to be censored, in particular, it is right censored.
Let's look at the example. We will begin by looking at a description of the data, some
descriptive statistics, and correlations among the variables.
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/acadindx
(max possible on acadindx is 200)
Contains data from acadindx.dta
(output omitted)
count if acadindx==200
corr acadindx female reading writing
(output omitted)
Now, let's run a standard OLS regression on the data and generate predicted scores in p1.
regress acadindx female reading writing
(output omitted)
predict p1
(option xb assumed; fitted values)
The tobit command is one of the commands that can be used for regression with
censored data. The syntax of the command is similar to regress with the addition of the ul
option to indicate that the right censored value is 200. We will follow the tobit
command by predicting p2 containing the tobit predicted values.
tobit acadindx female reading writing, ul(200)
Tobit estimates
Log likelihood = -718.06362
(output omitted)
Obs. summary:
184 uncensored observations
16 right-censored observations at acadindx&=200
predict p2
( fitted values)
Summarizing the p1 and p2 scores shows that the tobit predicted
values have a larger standard deviation and a greater range of values.
summarize acadindx p1 p2
<output omitted>
When we look at a listing of p1 and p2 for all students who scored the
maximum of 200 on acadindx, we see that in every case the tobit predicted value is
greater than the OLS predicted value. These predictions represent an estimate of what the
variability would be if the values of acadindx could exceed 200.
list p1 p2 if acadindx==200
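The direction of this censoring bias is easy to reproduce outside of Stata. Below is a minimal Python sketch on simulated data (the predictor, the true slope of 1.1, and the noise level are all invented for illustration): OLS fit to the censored scores understates the slope, which is exactly the bias tobit is designed to avoid.

```python
import random

def ols_slope(xs, ys):
    # One-predictor OLS slope: cov(x, y) / var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

random.seed(1)

# Hypothetical latent score with a true slope of 1.1 on one predictor;
# the recorded score is right-censored at 200, like acadindx.
x = [random.uniform(50, 130) for _ in range(5000)]
latent = [60 + 1.1 * xi + random.gauss(0, 10) for xi in x]
recorded = [min(yi, 200) for yi in latent]

slope_latent = ols_slope(x, latent)
slope_recorded = ols_slope(x, recorded)

print(round(slope_latent, 2))    # recovers the true slope (about 1.1)
print(round(slope_recorded, 2))  # attenuated: the ceiling flattens the top end
```

The gap between the two slopes is the same phenomenon seen above, where the tobit predicted values show more spread than the OLS predictions for students at the cap.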
Here is the syntax diagram for tobit:
tobit depvar [indepvars] [weight] [if exp] [in range], ll[(#)] ul[(#)]
        [ level(#) offset(varname) maximize_options ]
You can declare both lower and upper censored values. The censored values are fixed in
that the same lower and upper values apply to all observations.
There are two other commands in Stata that allow you more flexibility in doing
regression with censored data.
cnreg estimates a model in which the censored values may vary from observation
to observation.
intreg estimates a model where the response variable for each observation is
either point data, interval data, left-censored data, or right-censored data.
4.3.2 Regression with Truncated Data
Truncated data occurs when some observations are not included in the analysis because
of the value of the variable. We will illustrate analysis with truncation using the
dataset, acadindx, that was used in the previous section. If acadindx is no
longer loaded in memory you can get it with the following use command.
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/acadindx
(max possible on acadindx is 200)
Let's imagine that in order to get into a special honors program, students need to
score at least 160 on acadindx. So we will drop all observations in which the value
of acadindx is less than 160.
drop if acadindx <= 160
(56 observations deleted)
Now, let's estimate the same model that we used in the section on censored data, only
this time we will pretend that a 200 for acadindx is not censored.
regress acadindx female reading writing
<some output omitted>
It is clear that the estimates of the coefficients are distorted due to the fact that
56 observations are no longer in the dataset. This amounts to restriction of range on both
the response variable and the predictor variables. For example, the coefficient for
writing dropped from .79 to .59. What this means is that if our goal is to find the
relation between acadindx and the predictor variables in the population, then the
truncation of acadindx in our sample is going to lead to biased estimates. A better
approach to analyzing these data is to use truncated regression. In Stata this can be
accomplished using the truncreg command where the ll option is used to
indicate the lower limit of acadindx scores used in the truncation.
truncreg acadindx female reading writing, ll(160)
(note: 0 obs. truncated)
Truncated regression
Log likelihood = -510.00768
<remainder of output omitted>
The coefficients from the truncreg command are closer to the OLS results; for
example, the coefficient for writing is .77, which is closer to the OLS
result of .79. However, the results are still somewhat different for the other
variables; for example, the coefficient for reading is .52 in the truncreg,
as compared to .72 in the original OLS with the unrestricted data, and better than the OLS
estimate of .47 with the restricted data. While truncreg may
improve the estimates on a restricted data file as compared to OLS, it is certainly no
substitute for analyzing the complete unrestricted data file.
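The restriction-of-range effect described above can be reproduced with a small simulation. This is a hedged Python sketch on made-up data (the true slope of 0.8 and the truncation point of 60 are arbitrary choices): dropping the low-scoring cases biases the OLS slope toward zero, which is the problem truncreg is meant to address.

```python
import random

def ols_slope(xs, ys):
    # One-predictor OLS slope: cov(x, y) / var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

random.seed(2)

# Full population: y depends on x with a true slope of 0.8.
x = [random.uniform(30, 70) for _ in range(5000)]
y = [20 + 0.8 * xi + random.gauss(0, 8) for xi in x]
slope_full = ols_slope(x, y)

# Truncated sample: cases with y below 60 never enter the data at all,
# analogous to dropping acadindx scores of 160 or below.
kept = [(xi, yi) for xi, yi in zip(x, y) if yi >= 60]
slope_trunc = ols_slope([p[0] for p in kept], [p[1] for p in kept])

print(round(slope_full, 2))   # near the true 0.8
print(round(slope_trunc, 2))  # biased toward 0 by the restricted range
```

At low values of x, only cases with unusually large positive errors survive the cutoff, which flattens the fitted line just as truncating acadindx flattened the writing coefficient from .79 to .59.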
4.4 Regression with Measurement Error
As you will most likely recall, one of the assumptions of regression is that the
predictor variables are measured without error.
The problem is that measurement error in
predictor variables leads to underestimation of the regression coefficients.
Stata's eivreg
command takes measurement error into account when estimating the coefficients for the model.
Let's look at a regression using the hsb2 dataset.
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/hsb2
regress write read female
<some output omitted>
The predictor read is a standardized test score. Every test has measurement error. We
don't know the exact reliability of read, but using .9 for the reliability would
probably not be far off. We will now estimate the same regression model with the Stata eivreg
command, which stands for errors-in-variables regression.
eivreg write read female, r(read .9)
<some output omitted>
Note that the F-ratio and the R2 increased along with the regression
coefficient for read. Additionally, there is an increase in the standard error for read.
Now, let's try a model with read, math and
socst as predictors. First, we will run a
standard OLS regression.
regress write read math socst female
<some output omitted>
Now, let's try to account for the measurement error by using the following
reliabilities: read - .9, math - .9, socst - .8.
eivreg write read math socst female, r(read .9 math .9 socst .8)
<some output omitted>
Note that the overall F and R2 went up, but that the coefficient for read is
no longer statistically significant.
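The attenuation that eivreg corrects follows a simple rule in the classical one-predictor case: the OLS slope shrinks by roughly the reliability of the predictor, so dividing by the reliability recovers the true slope. A minimal Python sketch on simulated data (the true slope of 0.6 and the reliability of .9 are invented for the demonstration):

```python
import random

def ols_slope(xs, ys):
    # One-predictor OLS slope: cov(x, y) / var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((a - m) ** 2 for a in xs) / (len(xs) - 1)

random.seed(3)

true_x = [random.gauss(50, 10) for _ in range(20000)]
y = [5 + 0.6 * xi + random.gauss(0, 4) for xi in true_x]

# Observed predictor = true score + noise; the error sd is chosen so that
# reliability = var(true) / var(observed) is about .9.
err_sd = 10 * (1 / 0.9 - 1) ** 0.5
obs_x = [xi + random.gauss(0, err_sd) for xi in true_x]

naive = ols_slope(obs_x, y)
reliability = variance(true_x) / variance(obs_x)
corrected = naive / reliability  # errors-in-variables style correction

print(round(naive, 2))      # about .9 * 0.6 = 0.54, i.e. attenuated
print(round(corrected, 2))  # close to the true 0.6
```

This is why the read coefficient grew when we told eivreg its reliability was .9: the command disattenuates the estimate, at the cost of a larger standard error.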
4.5 Multiple Equation Regression Models
If a dataset has enough variables we may want to estimate more than one regression model.
For example, we may want to predict y1 from x1 and also predict y2 from x2.
Even though there
are no variables in common these two models are not independent of one another because
the data come from the same subjects.
This is an example of one type of multiple equation regression
known as seemingly unrelated regression.
We can estimate the coefficients and obtain standard errors taking into account the correlated
errors in the two models.
An important feature of multiple equation models is that we can
test predictors across equations.
Another example of multiple equation regression is if we wished to predict y1, y2 and y3 from
x1 and x2.
This is a three equation system, known as multivariate regression, with the same
predictor variables for each model.
Again, we have the capability of testing coefficients across
the different equations.
Multiple equation models are a powerful extension to our data analysis tool kit.
4.5.1 Seemingly Unrelated Regression
Let's continue using the hsb2 data file to illustrate the use of
seemingly unrelated regression. You can load it into memory again if it has been
cleared out.
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/hsb2
(highschool and beyond (200 cases))
This time let's look at two regression models.
science = math female
write   = read female
It is the case that the errors (residuals) from these two models would be correlated. This
would be true even if the predictor female were not found in both models. The errors would
be correlated because all of the values of the variables are collected on the same set of
observations. This is a situation tailor made for seemingly unrelated regression using the
sureg command. Here is our first model using OLS.
regress science math female
<some output omitted>
And here is our second model using OLS.
regress write read female
<some output omitted>
With the sureg command we can estimate both models simultaneously while
accounting for the correlated errors at the same time, leading to efficient estimates of
the coefficients and standard errors. By including the corr option with sureg
we can also obtain an estimate of the correlation between the errors of the two models.
Note that both the estimates of the coefficients and their standard errors are different
from the OLS model estimates shown above. The bottom of the output provides a
Breusch-Pagan test of
whether the residuals from the two equations are independent (in this case, we
would say the residuals were not independent, p=0.0407).
sureg (science math female) (write read female), corr
Seemingly unrelated regression
<coefficient tables omitted>
Correlation matrix of residuals:
<output omitted>
Breusch-Pagan test of independence: chi2(1) = 4.188, Pr = 0.0407
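Why the residuals of two "unrelated" equations end up correlated can be seen with a small simulation. This Python sketch uses invented data: a shared per-subject factor (think of it as general ability, deliberately left out of both models) enters both outcomes, so the residuals from two separate OLS fits are clearly correlated even though the predictors differ.

```python
import random

def ols_residuals(xs, ys):
    # Fit a one-predictor OLS line and return the residuals.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    intercept = my - slope * mx
    return [b - (intercept + slope * a) for a, b in zip(xs, ys)]

def corr(us, vs):
    mu, mv = sum(us) / len(us), sum(vs) / len(vs)
    suv = sum((a - mu) * (b - mv) for a, b in zip(us, vs))
    suu = sum((a - mu) ** 2 for a in us)
    svv = sum((b - mv) ** 2 for b in vs)
    return suv / (suu * svv) ** 0.5

random.seed(4)
n = 3000

# A shared, unmodeled per-subject factor enters both outcomes.
ability = [random.gauss(0, 5) for _ in range(n)]
math_s = [random.gauss(50, 10) for _ in range(n)]
read_s = [random.gauss(50, 10) for _ in range(n)]
science = [20 + 0.5 * m + a + random.gauss(0, 5) for m, a in zip(math_s, ability)]
write_s = [15 + 0.6 * r + a + random.gauss(0, 5) for r, a in zip(read_s, ability)]

rho = corr(ols_residuals(math_s, science), ols_residuals(read_s, write_s))
print(round(rho, 2))  # clearly positive (about 0.5 in this setup)
```

A nonzero residual correlation like this is what the Breusch-Pagan test above detects, and exploiting it is how sureg gains efficiency over equation-by-equation OLS.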
Now that we have estimated our models let's test the predictor variables. The test for female
combines information from both models. The tests for math and read are
actually equivalent to the z-tests above except that the results are displayed as
chi-square tests.
test female
[science]female = 0.0
[write]female = 0.0
Prob > chi2 = <value omitted>
test math
[science]math = 0.0
Prob > chi2 = <value omitted>
test read
[write]read = 0.0
Prob > chi2 = <value omitted>
Now, let's estimate 3 models where we use the same predictors in each model as shown below.
read  = female prog1 prog3
write = female prog1 prog3
math  = female prog1 prog3
If you no longer have the dummy variables for prog, you can recreate them using
the tabulate command.
tabulate prog, gen(prog)
Let's first estimate these three models using 3 OLS regressions.
regress read female prog1 prog3
<some output omitted>
regress write female prog1 prog3
<some output omitted>
regress math female prog1 prog3
<some output omitted>
These regressions provide fine estimates of the coefficients and standard errors but
these results assume the residuals of each analysis are completely independent of the
others. Also, if we wish to test female, we would have to do it three times and
would not be able to combine the information from all three tests into a single overall test.
Now let's use sureg to estimate the same models. Since all 3 models have
the same predictors, we can use the syntax as shown below which says that read,
write and math will each be predicted by female,
prog1 and prog3. Note that the coefficients are identical
in the OLS results above and the sureg results below; however, the
standard errors are slightly different, due to the correlation among the residuals
in the multiple equations.
sureg (read write math = female prog1 prog3), corr
Seemingly unrelated regression
<coefficient tables omitted>
Correlation matrix of residuals:
<output omitted>
Breusch-Pagan test of independence: chi2(3) = 189.811, Pr = 0.0000
In addition to getting more appropriate standard errors, sureg allows
us to test the effects of the predictors across the equations. We can test the
hypothesis that the coefficient for female is 0 for all three outcome
variables, as shown below.
test female
[read]female = 0.0
[write]female = 0.0
[math]female = 0.0
Prob > chi2 = <value omitted>
We can also test the hypothesis that the coefficient for female is 0
for just read and math. Note that [read]female
means the coefficient for female for the outcome variable read.
test [read]female [math]female
[read]female = 0.0
[math]female = 0.0
Prob > chi2 = <value omitted>
We can also test the hypothesis that the coefficients for prog1 and prog3
are 0 for all three outcome variables, as shown below.
test prog1 prog3
[read]prog1 = 0.0
[write]prog1 = 0.0
[math]prog1 = 0.0
[read]prog3 = 0.0
[write]prog3 = 0.0
[math]prog3 = 0.0
Prob > chi2 = <value omitted>
4.5.2 Multivariate Regression
Let's now use multivariate regression using the mvreg command to look
at the same analysis that we saw in the sureg example above,
estimating the following 3 models.
read  = female prog1 prog3
write = female prog1 prog3
math  = female prog1 prog3
If you don't have the hsb2 data file in memory, you can use it below
and then create the dummy variables for prog1 - prog3.
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/hsb2
tabulate prog, gen(prog)
<output omitted>
Below we use mvreg to predict read,
write and math
from female,
prog1 and prog3. Note that the top part of
the output is similar to the sureg output in that it gives an overall
summary of the model for each outcome variable; however, the results are somewhat different
in that sureg uses a chi-square test for the overall fit
of the model, while mvreg uses an F-test. The lower part
of the output appears similar to the sureg output; however, when you
compare the standard errors you see that the results are not the same. These standard errors
correspond to the OLS standard errors, so the results below do not take into account the
correlations among the residuals (as do the sureg results).
mvreg read write math = female prog1 prog3
<some output omitted>
Now, let's test female. Note, that female was statistically significant
in only one of the three equations. Using the test command after mvreg allows us to
test female across all three equations simultaneously. And, guess what? It is
significant. This is consistent with what we found using sureg (except
that sureg did this test using a Chi-Square test).
test female
[read]female = 0.0
[write]female = 0.0
[math]female = 0.0
Prob > F = <value omitted>
We can also test prog1 and prog3, both separately and combined. Remember
these are multivariate tests.
test prog1
[read]prog1 = 0.0
[write]prog1 = 0.0
[math]prog1 = 0.0
Prob > F = <value omitted>
test prog3
[read]prog3 = 0.0
[write]prog3 = 0.0
[math]prog3 = 0.0
Prob > F = <value omitted>
test prog1 prog3
[read]prog1 = 0.0
[write]prog1 = 0.0
[math]prog1 = 0.0
[read]prog3 = 0.0
[write]prog3 = 0.0
[math]prog3 = 0.0
Prob > F = <value omitted>
Many researchers familiar with traditional multivariate analysis may not recognize the
tests above. They don't see Wilks' Lambda, Pillai's Trace or the Hotelling-Lawley Trace
statistics, statistics that they are familiar with. It is possible to obtain these
statistics using the mvtest command written by David E. Moore of the University of
Cincinnati. mvtest, which UCLA updated to work with Stata 6 and above,
can be downloaded over the internet like this.
net from http://www.ats.ucla.edu/stat/stata/ado/analysis
net install mvtest
Now that we have downloaded it, we can use it like this.
mvtest female
MULTIVARIATE TESTS OF SIGNIFICANCE
Multivariate Test Criteria and Exact F Statistics for
the Hypothesis of no Overall "female" Effect(s)
<Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace values omitted>
mvtest prog1 prog3
MULTIVARIATE TESTS OF SIGNIFICANCE
Multivariate Test Criteria and Exact F Statistics for
the Hypothesis of no Overall "prog1 prog3" Effect(s)
<Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace values omitted>
We will end with an mvtest including all of the predictor variables. This is an
overall multivariate test of the model.
mvtest female prog1 prog3
MULTIVARIATE TESTS OF SIGNIFICANCE
Multivariate Test Criteria and Exact F Statistics for
the Hypothesis of no Overall "female prog1 prog3" Effect(s)
<Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace values omitted>
The sureg and mvreg commands both allow you to test
multi-equation models while taking into account the fact that the equations are not
independent. The sureg command allows you to get estimates for each
equation which adjust for the non-independence of the equations, and it allows you to
estimate equations which don't necessarily have the same predictors. By contrast, mvreg
is restricted to equations that have the same set of predictors, and the estimates it
provides for the individual equations are the same as the OLS estimates. However, mvreg
(especially when combined with mvtest) allows you to perform more
traditional multivariate tests of predictors.
4.6 Summary
This chapter has covered a variety of topics that go beyond ordinary least
squares regression, but there still remain a variety of topics we wish we could
have covered, including the analysis of survey data, dealing with missing data,
panel data analysis, and more. And, for the topics we did cover, we wish we
could have gone into even more detail. One of our main goals for this chapter
was to help you be aware of some of the techniques that are available in Stata
for analyzing data that do not fit the assumptions of OLS regression and some of
the remedies that are possible. If you are a member of the UCLA research
community, and you have further questions, we invite you to use our
consulting services to discuss issues specific to your data analysis.
4.7 Self Assessment
1. Use the crime data file that was used in chapter 2 (use
http://www.ats.ucla.edu/stat/stata/webbooks/reg/crime ) and look at a regression model
predicting murder from pctmetro, poverty,
and single using OLS, and make avplots and a lvr2plot
following the regression. Are there any states that look worrisome? Repeat this analysis
using regression with robust standard errors and show avplots
for the analysis. Repeat the analysis using robust regression and make a
manually created lvr2plot. Also run the results using qreg.
Compare the results of the different analyses. Look at the weights from the
robust regression and comment on the weights.
2. Using the elemapi2 data file (use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
) pretend that 550 is the lowest score that a school could achieve on api00,
i.e., create a new variable with the api00 score and recode it
such that any score of 550 or below becomes 550. Use meals, ell
and emer to predict api scores using 1) OLS to predict the
original api score (before recoding) 2) OLS to predict the recoded score where
550 was the lowest value, and 3) using tobit to predict the
recoded api score indicating the lowest value is 550. Compare the results of
these analyses.
3. Using the elemapi2 data file (use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
) pretend that only schools with api scores of 550 or higher were included in
the sample. Use meals, ell and emer
to predict api scores using 1) OLS to predict api from the full set of
observations, 2) OLS to predict api using just the observations with api scores
of 550 or higher, and 3) using truncreg to predict api using
just the observations where api is 550 or higher. Compare the results of these analyses.
4. Using the hsb2 data file (use http://www.ats.ucla.edu/stat/stata/webbooks/reg/hsb2
) predict read from science,
socst and math.
Use the testparm and test commands to test
the equality of the coefficients for science, socst
and math. Use cnsreg to estimate a model where
these three parameters are equal.
5. Using the elemapi2 data file (use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
) consider the following 2 regression equations.
api00 = meals ell emer
api99 = meals ell emer
Estimate the coefficients for these predictors in predicting api00
and api99 taking into account the non-independence of the
schools. Test the overall contribution of each of the predictors in jointly
predicting api scores in these two years. Test whether the contribution of emer
is the same for api00 and api99.
Our answers to these self assessment questions are available on the book's web site.
4.8 For more information
Stata Manuals
[R] cnsreg
[R] truncreg
[R] eivreg
[U] 23 Estimation and post-estimation commands
[U] 29 Overview of model estimation in Stata
The content of this web site should not be construed as an endorsement
of any particular web site, book, or software product by the
University of California.
