Wanted: A Simple Measure of Success in a Complex World
By: Caroline Heider
Speaking at the World Bank, Ben Ramalingam, author of Aid on the Edge of Chaos,
set out a challenge to those working in the humanitarian and development fields: move away from a narrow focus on what we think is important and take a more wide-angle approach to the issues we’re dealing with.
“Responses to complex challenges need to be adaptive,” he said. “Rather than strategies for best practice we should be looking at strategies for best fit.”
Ben discussed his plans to push the debate on change in the development system during a lunch we had about two years ago, where we continued a conversation we had started years earlier, when both of us worked on evaluating humanitarian assistance.
We both agreed that, in a world riddled with unpredictability, the usefulness of linear models was limited and that there was a pressing need to rethink how development assistance works.
Ben’s book argues that our models are based on simplifications that make assumptions, eliminate real-life factors, and fail to reflect that the world is a complex maze of interrelationships.
A tool like the logical framework, as Ben says, if “[d]one right...make[s] users think carefully and systematically about their plans, and how activities will contribute to goals.” Drawing on many evaluations, he observes that the tool is often used mechanistically: results-based management and M&E systems are typically focused at the input-output level and are based on linear relations that ignore rather than recognize complexity.
Is the tool to blame? You might think this is a funny question for an evaluator to ask. After all, don’t we use logframes as the basis for our assessments? Yet, as people struggle to put together meaningful results frameworks, the question is inescapable.
Arguments that the logframe is too limiting, that it doesn’t take other factors into account, or that it doesn’t cater to the complexity of situations are true. But only in part.
The tool actually requires planners to clarify their assumptions and assess risks. In other words: think about a networked and chaotic reality and choose a more linear set of goals, objectives, and outputs. Without it, one is left trying to develop from first principles what might be the appropriate systems for adequate planning, learning and evaluation under complex circumstances.
For us at the World Bank Group, the challenge is twofold:
- We understand the world is complex. The new model – the Solutions Bank Group – has been conceived precisely to correspond to this reality and aims to bring about transformational change in how we work.
- To support these changes, we need practical measures to demonstrate – and evaluate independently – whether multi-dimensional development solutions are working, what changes they bring about, and how problems are fixed as they arise.
So how do we get there and what are the risks? Three stand out:
Oversimplification. Past experience is riddled with examples of results that are simply outputs. Take road construction. The simplest measure is the distance that has been built. But how will this tell us what the road will achieve? In an earlier role I evaluated road projects. Some resulted in transformational change, economic empowerment, and a reduction in roadside robberies. Others ended up as roads less traveled, with no economic or social value. So what, then, is a measure that is simple enough to add up and yet meaningful enough to tell us about results?
Over-abstraction. Wouldn’t it be great if we had a simple index that tells us whether things are improving or not? It’s a very seductive thought - a number that indicates how well or how badly things are going. But will the new construct again revert to simplifying models – the ones that Ben points to as the crux of the development matter – in order to capture in a single number what is a complex process?
Undefined. So, if an iterative learning process is more appropriate in this age of complexity, should we not simply leave our targets undefined and figure things out as we go along? If so, how would we manage the risk, aptly discussed in Ben’s book, of errors that might creep in because we are unaware of our assumptions, have a tendency to simplify models, and tend to follow the same path repeatedly? How will we know if we are wasting valuable time, effort, and resources instead of investing them effectively?
During next week’s Spring Meetings, we will sponsor a panel of eminent thinkers and pose this challenge to them, so that we can take a practical approach to the new science of delivery. I urge you to make your voice heard.
A simple model for straggling evaluation
NASA Langley Research Center, Hampton, VA, USA. john.w.wilson@larc.nasa.gov
Abstract: Some straggling models had largely been abandoned in favor of Monte Carlo simulations of straggling, which are accurate but time consuming, limiting their application in practice. The difficulty with simple analytic models is their failure to give accurate values past 85% of the particle range. A simple model is derived herein, based on a second-order approximation, upon which rapid analysis tools are developed for improved understanding of material charged-particle transmission properties. Published by Elsevier Science B.V.

Portals and Mirrors:
Simple, Fast Evaluation of Potentially Visible Sets
David P. Luebke and Chris Georges
Department of Computer Science
University of North Carolina at Chapel Hill
Plate 1: View from the master bedroom of the Brooks House showing
cull boxes for portals (white) and mirrors (red).
Plate 2: Overhead view of the Brooks House, showing the portal culling frustums
active in Plate 1 (mirror frustum shown in red).
Abstract
We describe an approach for determining potentially visible
sets in dynamic architectural models. Our scheme divides the
models into cells and portals, computing a conservative estimate
of which cells are visible at render time. The technique is simple
to implement and can be easily integrated into existing systems,
providing increased interactive performance on large architectural models.
Introduction
Architectural models typically exhibit high depth complexity
paired with heavy occlusion. The ratio of objects actually
visible to the viewer (not occluded by walls) to objects theoretically
visible to the viewer (intersecting the view frustum) will
usually be small in a walkthrough situation. A visibility algorithm
aimed at reducing the number of primitives rendered can exploit
this property. Following prior work [1,2,3], we make use of a
subdivision that divides such models along the occluding primitives
into "cells" and "portals". A cell is a polyhe
a portal is a transparent 2D region upon a cell boundary that
connects adjacent cells. Cells can only "see" other cells through the
portals. In an architectural model, the cell boundaries should
follow the walls and partitions, so that cells roughly correspond to
the rooms of the building. The portals likewise correspond to the
doors and windows through which neighboring rooms can view
each other.
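
To make the cell-and-portal subdivision concrete, the following minimal
C++ sketch shows one way such a model might be represented. The paper gives
no code, so the type and field names (Cell, Portal, Object, attachedCell)
are purely illustrative.

```cpp
#include <vector>

// Illustrative cell-and-portal representation (not the paper's data structures).

struct Vec3 { float x, y, z; };

struct Portal {
    std::vector<Vec3> boundary;     // polygonal boundary of the opening
    int attachedCell;               // index of the cell on the other side
};

struct Object {
    std::vector<Vec3> boundingBox;  // the 8 corners of the object's 3D bounding box
    int lastRenderedFrame = -1;     // tag used later for object-level culling
};

struct Cell {
    std::vector<Object> objects;    // geometry contained in this cell (roughly, one room)
    std::vector<Portal> portals;    // doors and windows leading to adjacent cells
};

struct Model {
    std::vector<Cell> cells;        // the whole subdivided model
    int viewerCell = 0;             // index of the cell containing the viewpoint
};
```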
Given such a spatial partitioning of the model, we can
determine each frame what cells may be visible to the viewer. By
traversing only the cells in this potentially visible set (PVS), we
can avoid submitting occluded portions of the model to the
graphics pipeline. What cells comprise the PVS? Certainly the cell
containing the viewpoint is potentially visible. Those neighboring
cells which share a portal with the initial cell must also be
counted as potentially visible, since the viewer could see those
cells through the portal. To this we add those cells visible through
the portals of these neighbors, and so on. In this manner the
problem of determining what cells are potentially visible to the viewer
reduces to the problem of determining what portals are visible
through the portals of the viewer's cell.
Our system makes this determination dynamically at render
time. Rather than finding the exact PVS for each cell as a
preprocess, we postpone the visibility computation as long as possible.
This type of dynamic evaluation of portal-portal visibility is not
new. Earlier efforts have centered on precisely determining
sightlines through portals; our method offers a less exact but much
simpler alternative. The algorithm has been implemented on the
Pixel-Planes 5 graphics computer at the University of North
Carolina and provides a substantial speedup on a sample model of
50,000 polygons.
Previous Work
Jones [1] explored the subdivision of geometry into cells and
portals as a technique for hidden line removal. In his algorithm,
models are manually subdivided into convex polyhedral cells and
convex polygonal portals. The subdivision is complete in the
sense that every polygon in the dataset is embedded in the face of
one or more cells. Rendering begins by drawing the walls and
portals of the cell containing the viewer. As each portal is drawn, the
cell on the opposite side of the portal is recursively rendered. In
this way the cell adjacency graph defined by the partitioning is
traversed in depth-first fashion. The portal sequence through
which the current cell is being rendered comprises a convex
"mask" to which the contents of the cell are clipped. If the
intersection of a portal with the current mask is empty, the portal is
invisible and the attached cell need not be traversed.
More recent work has abandoned the attempt to compute
exact visibility information, focusing instead on computing a
conservative PVS of objects that may be visible from the viewer's
cell. The graphics pipeline then uses standard Z-buffer techniques
to resolve exact visibility. Airey [2] was the first to use a
portal-based approach effective in architectural environments. He
described multiple ways to approach the problem of determining
cell-to-cell visibility, including ray-casting and shadow volumes.
Teller [3] has taken the concept further and found a closed-form,
analytic solution to the portal-portal visibility problem. Using 2D
linear programming to test portal sequences against arbitrary
visibility beams, Teller computes a complete set of cell-to-cell and
cell-to-object visibilities in a preprocess. At render time this PVS
is further restricted according to which portals are actually
visible. Teller's approach is mathematically and computationally
complex, requiring hours of preprocess time for large models [3].
Motivation
Such a large preprocessing cost may be inappropriate for
interactive applications. For example, architectural walkthroughs
are often used for revision purposes. A visualization of a building
under design is more valuable to an architect if inquiries of the
type "What if I move this wall out ten feet?" can be answered
immediately. Adding portals, moving portals, and redistributing
cell boundaries will all be common operations in an interactive
architectural design application. To take full advantage of the
static visibility schemes mentioned above, each of these would
require a potentially lengthy PVS recalculation best done off-line.
Envisioning such an application as our final goal, we
decided to focus on improving the dynamic visibility determination.
Jones' algorithm finds the exact intersection of 2D convex
regions, requiring O(n lg n) time for portal sequences with n
edges. Teller's linear programming approach computes only the
existence of an intersection, and runs in time linear in the number
of edges. We sought a dynamic solution that would also run in linear
time and would integrate easily into existing systems.
Faster Dynamic PVS Evaluation
We use a variation of Jones' approach that employs bounding boxes
instead of general convex regions. Our scheme first
projects the vertices of each portal into screen-space and takes the
axial 2D bounding box of the resulting points. This 2D box,
called the cull box, represents a conservative overestimate of the portal's
extent in screenspace; that is, objects whose screenspace projection falls entirely outside
the cull box are guaranteed not to be visible through the portal
and may be safely culled away. As each successive portal is
traversed, its box is intersected with the aggregate cull box using
only a few comparisons.
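
The cull box and its intersection amount to a handful of min/max
operations. The sketch below is not from the paper; it repeats the Vec3
type for self-containment and assumes a trivial project() helper, where a
real renderer would apply its full camera and viewport transform.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };   // as in the earlier sketch
struct Vec2 { float x, y; };

// Axial 2D bounding box in screen space.
struct CullBox {
    float minX, minY, maxX, maxY;
    bool empty() const { return minX > maxX || minY > maxY; }
};

// Assumed, simplified perspective projection of a vertex onto the screen plane.
Vec2 project(const Vec3& v) {
    return { v.x / v.z, v.y / v.z };
}

// Conservative screenspace bound of a portal: project each boundary vertex
// and take the axial bounding box of the projected points.
CullBox portalCullBox(const std::vector<Vec3>& boundary) {
    CullBox box{ 1e30f, 1e30f, -1e30f, -1e30f };
    for (const Vec3& v : boundary) {
        Vec2 p = project(v);
        box.minX = std::min(box.minX, p.x);
        box.minY = std::min(box.minY, p.y);
        box.maxX = std::max(box.maxX, p.x);
        box.maxY = std::max(box.maxY, p.y);
    }
    return box;
}

// Intersecting the aggregate cull box with the next portal's box takes
// only four min/max comparisons.
CullBox intersect(const CullBox& a, const CullBox& b) {
    return { std::max(a.minX, b.minX), std::max(a.minY, b.minY),
             std::min(a.maxX, b.maxX), std::min(a.maxY, b.maxY) };
}
```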
During traversal the contents of each cell are tested for visibility
through the current portal sequence by comparing the
screenspace projection of each object's bounding box against the
intersected cull box of all portals in the sequence. If the projected
bounding box intersects the aggregate cull box, the object is
potentially visible through the portals and must be rendered.
Since a single object may be visible through multiple portal
sequences, we tag each object as we render it. This object-level
culling lets us avoid rendering objects more than once per frame.
Alternatively, we can render each object once for every portal
sequence which admits a view of the object, but clip the actual
primitives to the aggregate cull box of each sequence. Under this
primitive-level clipping scheme objects may be visited more than
once, but since the portal boundaries do not overlap, no portion of
any primitive will be rendered twice. Typically object-level
culling will prove more efficient, but for objects whose per-primitive
rendering cost far exceeds their clipping cost, primitive-level
clipping provides a viable option.
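
In code, the object-level test reduces to a 2D overlap check against the
aggregate cull box plus a per-frame tag. The hypothetical sketch below
builds on the types and helpers from the sketches above; render() stands
in for whatever call submits an object's primitives to the pipeline.

```cpp
// Assumed stand-in for the call that submits an object's primitives.
void render(const Object& /*obj*/) { /* graphics pipeline submission */ }

bool overlaps(const CullBox& a, const CullBox& b) {
    return !intersect(a, b).empty();
}

// Test each object in the cell against the aggregate cull box of the
// current portal sequence, rendering it at most once per frame.
void cullAndRenderObjects(Cell& cell, const CullBox& aggregate, int frame) {
    for (Object& obj : cell.objects) {
        if (obj.lastRenderedFrame == frame)
            continue;                                        // already drawn via another sequence
        CullBox screenBox = portalCullBox(obj.boundingBox);  // project the 8 box corners
        if (overlaps(screenBox, aggregate)) {                // potentially visible through the portals
            obj.lastRenderedFrame = frame;                   // tag so other sequences skip it
            render(obj);
        }
    }
}
```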
Implementation
We have implemented this approach on Pixel-Planes 5, the
custom graphics multicomputer developed at the University of
North Carolina. The traversal mechanism treats portals as primitives
to be rendered. Each portal consists of a polygonal boundary
and a pointer to the attached cell; when a portal is encountered
during traversal we test its axial screenspace bounding box
against the current aggregate cull box. If the intersection is
non-empty, we use it as the new aggregate cull box and recursively
traverse the connected cell.
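
Putting the pieces together, the traversal can be sketched as a
depth-first recursion over the cell adjacency graph. The outline below is
again an assumption, not the Pixel-Planes 5 implementation; the back-portal
check and depth cap are safeguards added only to keep the sketch
well-behaved and are not discussed in the paper.

```cpp
const int kMaxPortalDepth = 16;   // assumed safeguard against degenerate portal cycles

// Render the current cell's contents, then recurse through each portal
// whose cull box still intersects the aggregate cull box.
void traverseCell(Model& model, int cellIndex, int fromCell,
                  const CullBox& aggregate, int frame, int depth = 0) {
    if (depth > kMaxPortalDepth)
        return;
    Cell& cell = model.cells[cellIndex];
    cullAndRenderObjects(cell, aggregate, frame);
    for (const Portal& portal : cell.portals) {
        if (portal.attachedCell == fromCell)
            continue;                                 // don't bounce straight back through the doorway
        CullBox box = intersect(aggregate, portalCullBox(portal.boundary));
        if (!box.empty())                             // portal survives the cull box test
            traverseCell(model, portal.attachedCell, cellIndex, box, frame, depth + 1);
    }
}

// Each frame starts from the viewer's cell with a cull box covering the
// whole screen (assumed here to be normalized to [-1, 1] on both axes).
void renderFrame(Model& model, int frame) {
    CullBox fullScreen{ -1.0f, -1.0f, 1.0f, 1.0f };
    traverseCell(model, model.viewerCell, /*fromCell=*/-1, fullScreen, frame);
}
```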
We feel that modeler integration is crucial to this problem of
interactive model revision. If an architect wishes to move a wall
or broaden a doorway, the modeling system should be able to
make the change quickly and broadcast that change to the
graphics system. In our system the spatial partitioning of the model
into cells and portals is directly embedded in the modeler's
representation. Portals are treated as augmented polygons, each tagged
with the name of the attached cell. Cells are simply logical
groupings in the modeler's hierarchy and need not necessarily be
convex. We have found this quite convenient when constructing
models: each room typically corresponds to a cell, and it takes
only seconds to add and move a portal, or to reshape a cell. We
have already adapted two commercial modelers to our system,
which speaks to the simplicity of the integration process.
We have tested our system on a subset of the UNC Walkthrough
project's model of Professor Fred Brooks' house,
comprised of 367,000 radiositized triangles. The speedup
obtained by this visibility algorithm, like the speedup obtained by
similar schemes, is extremely view- and model-dependent. Over a
500-frame test path through the model, the frame rate using PVS
evaluation ranged from just over 1 to almost 10 times the frame
rate of the entire unculled model. For typical views the dynamic
PVS evaluation culled away 20% to 50% of the model. It should
be emphasized again that these numbers are specific to the model
and view path, but they certainly indicate the promise of the
algorithm as a simple, effective acceleration technique.
Ongoing and Future Work
Efficiency could be further increased by applying obscuration
culling to portals [4]. This scheme tests potentially visible
items against an "almost complete" Z-buffer before rendering.
This would allow the `detail' objects in each cell as well as the
occluding cell walls to block portals, potentially reducing the
PVS. The Pixel-Planes architecture makes obscuration culling of
portals feasible, and we are currently exploring this possibility.
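
As a rough illustration of the idea, obscuration culling a portal could
look something like the following hypothetical test of the portal's cull
box against a coarse depth buffer built from already-rendered occluders;
the buffer layout, coordinate mapping, and depth convention (smaller is
nearer) are all assumptions, not the Pixel-Planes mechanism.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical obscuration test: the portal is culled only if every coarse
// depth sample covering its cull box is nearer than the portal itself.
bool portalObscured(const CullBox& box, float portalNearestDepth,
                    const std::vector<float>& coarseDepth, int width, int height) {
    // Map the normalized [-1, 1] cull box to coarse-buffer pixel coordinates.
    int x0 = std::max(0, (int)((box.minX * 0.5f + 0.5f) * width));
    int x1 = std::min(width - 1, (int)((box.maxX * 0.5f + 0.5f) * width));
    int y0 = std::max(0, (int)((box.minY * 0.5f + 0.5f) * height));
    int y1 = std::min(height - 1, (int)((box.maxY * 0.5f + 0.5f) * height));
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (coarseDepth[y * width + x] > portalNearestDepth)
                return false;   // an occluder sample is farther than the portal: it may show through
    return true;                // every covered sample is nearer: nothing behind the portal is visible
}
```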
Teller mentions that the concept of portals may be extended
to mirrors [3]. Under this scheme mirrors are treated as portals
which transform the attached cell about the plane of the mirror;
this has the advantage of automatically restricting the PVS seen
through the mirror. Though conceptually simple, mirrors introduce
many practical difficulties which require additional clipping
by the rendering engine to resolve. For example, geometry behind
the mirror must not appear in its reflected "world," and reflected
geometry must not appear in front or to the side of the mirror.
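
As a purely illustrative aside on what "transforming the attached cell"
involves, reflecting a vertex about the mirror plane (given a point on the
plane and its unit normal) can be written as below; none of this code is
from the paper.

```cpp
// Reflect a point about a mirror plane defined by a point on the plane (p)
// and a unit normal (n). Illustrative sketch only.
struct Vec3 { float x, y, z; };

Vec3 reflectAboutPlane(const Vec3& v, const Vec3& p, const Vec3& n) {
    // Signed distance from v to the plane.
    float d = (v.x - p.x) * n.x + (v.y - p.y) * n.y + (v.z - p.z) * n.z;
    // Move the point back through the plane by twice that distance.
    return { v.x - 2.0f * d * n.x, v.y - 2.0f * d * n.y, v.z - 2.0f * d * n.z };
}
```

Geometry that lands in front of or beside the mirror after this reflection
is exactly what the additional clipping described above must discard.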
A special case that avoids these problems can be constructed
by embedding the mirror in an opaque cell boundary (for
example, a wall-mounted mirror in a bathroom), and we have
implemented such mirrors (Plate 1). The concept of an immovable
mirror fits poorly with our goal of interactive, dynamic
environments, however, so we have focused on the more general
case. Clipping is complicated further by mirrors that overlap in
screenspace, and further still by mirrors which recursively reflect
other mirrors. At present our system allows static mirrors, which
can reflect each other to arbitrary levels of recursion, or more
general "hand-held" mirrors, (an example of free-moving portals),
which permit one-bounce reflections. We are currently working
on the dynamic, fully recursive case.
Acknowledgments
The authors would like to extend their sincere thanks to
Mike Goslin, Hans Weber, Power P. Ponamgi, Peggy Wetzel,
and Stump Brady. This work was supported by ARPA Contract
DABT63-93-C-C048.
References
[1] Jones, C.B. A New Approach to the `Hidden Line' Problem.
The Computer Journal, vol. 14, no. 3 (August 1971), 232.
[2] Airey, John. Increasing Update Rates in the Building
Walkthrough System with Automatic Model-Space Subdivision
and Potentially Visible Set Calculations.
Ph.D. thesis, UNC-CH CS Department TR #90-027 (July 1990).
[3] Teller, Seth.
Visibility Computation in Densely Occluded
Polyhedral Environments.
Ph.D. thesis, UC Berkeley CS
Department, TR #92/708 (1992).
[4] Greene, Ned, Kass, Michael, and Miller, Gavin.
Hierarchical
Z-Buffer Visibility.
Proceedings of SIGGRAPH '93
(Anaheim, California 1993). In Computer Graphics Proceedings,
Annual Conference Series, 1993, ACM SIGGRAPH, New York 1993, pp. 59-66.
David P. Luebke
Assistant Professor
Olsson Hall #219
Charlottesville, VA 22903
(804) 924-1021