
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement, and has the potential for very high throughput in hardware implementations. It was the algorithm of the widely used Unix file compression utility compress, and is used in the GIF image format.
The scenario described by Welch's 1984 paper encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 represent 1-character sequences consisting of the corresponding 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded. At each stage in compression, input bytes are gathered into a sequence until the next character would make a sequence for which there is no code yet in the dictionary. The code for the sequence (without that character) is added to the output, and a new code (for the sequence with that character) is added to the dictionary.
The idea was quickly adapted to other situations. In an image based on a color table, for example, the natural character alphabet is the set of color table indexes, and in the 1980s, many images had small color tables (on the order of 16 colors). For such a reduced alphabet, the full 12-bit codes yielded poor compression unless the image was large, so the idea of a variable-width code was introduced: codes typically start one bit wider than the symbols being encoded, and as each code size is used up, the code width increases by 1 bit, up to some prescribed maximum (typically 12 bits).
Further refinements include reserving a code to indicate that the code table should be cleared (a "clear code", typically the first value immediately after the values for the individual alphabet characters), and a code to indicate the end of data (a "stop code", typically one greater than the clear code). The clear code allows the table to be reinitialized after it fills up, which lets the encoding adapt to changing patterns in the input data. Smart encoders can monitor the compression efficiency and clear the table whenever the existing table no longer matches the input well.
Since the codes are added in a manner determined by the data, the decoder mimics building the table as it sees the resulting codes. It is critical that the encoder and decoder agree on which variety of LZW is being used: the size of the alphabet, the maximum code width, whether variable-width encoding is being used, the initial code size, whether to use the clear and stop codes (and what values they have). Most formats that employ LZW build this information into the format specification or provide explicit fields for them in a compression header for the data.
A high-level view of the encoding algorithm is shown here:
1. Initialize the dictionary to contain all strings of length one.
2. Find the longest string W in the dictionary that matches the current input.
3. Emit the dictionary index for W to output and remove W from the input.
4. Add W followed by the next symbol in the input to the dictionary.
5. Go to Step 2.
A dictionary is initialized to contain the single-character strings corresponding to all the possible input characters (and nothing else except the clear and stop codes if they're being used). The algorithm works by scanning through the input string for successively longer substrings until it finds one that is not in the dictionary. When such a string is found, the index for the string without the last character (i.e., the longest substring that is in the dictionary) is retrieved from the dictionary and sent to output, and the new string (including the last character) is added to the dictionary with the next available code. The last input character is then used as the next starting point to scan for substrings.
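The following minimal Python sketch illustrates this encoding loop on plain integer codes; it leaves out variable-width output, the clear and stop codes, and the maximum code width, and the function and variable names are illustrative assumptions rather than code from any particular implementation.

```python
def lzw_encode(data, initial_dictionary):
    """Encode a string of symbols into a list of integer codes (simplified sketch)."""
    dictionary = dict(initial_dictionary)    # single-symbol strings -> codes
    next_code = max(dictionary.values()) + 1
    codes = []
    w = ""                                   # current sequence, the omega of the text
    for symbol in data:
        ws = w + symbol
        if ws in dictionary:
            w = ws                           # keep extending the longest match
        else:
            codes.append(dictionary[w])      # emit the code for the longest match
            dictionary[ws] = next_code       # register the new, longer sequence
            next_code += 1
            w = symbol                       # restart scanning from the last symbol
    if w:
        codes.append(dictionary[w])          # flush whatever is left at end of input
    return codes
```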
In this way, successively longer strings are registered in the dictionary and made available for subsequent encoding as single output values. The algorithm works best on data with repeated patterns, so the initial parts of a message will see little compression. As the message grows, however, the compression ratio tends asymptotically to the maximum.
The decoding algorithm works by reading a value from the encoded input and outputting the corresponding string from the initialized dictionary. In order to rebuild the dictionary in the same way as it was built during encoding, it also obtains the next value from the input and adds to the dictionary the concatenation of the current string and the first character of the string obtained by decoding the next input value, or the first character of the string just output if the next value cannot be decoded. (If the next value is unknown to the decoder, then it must be the value that will be added to the dictionary this iteration, and so its first character must be the same as the first character of the current string being sent to decoded output.) The decoder then proceeds to the next input value (which was already read in as the "next value" in the previous pass) and repeats the process until there is no more input, at which point the final input value is decoded without any more additions to the dictionary.
In this way the decoder builds up a dictionary which is identical to that used by the encoder, and uses it to decode subsequent input values. Thus the full dictionary does not need to be sent with the encoded data; just the initial dictionary containing the single-character strings is sufficient (and it is typically defined beforehand within the encoder and decoder rather than being explicitly sent with the encoded data).
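A matching decoder can be sketched in the same simplified style, consuming the integer codes produced by the encoder sketch above. The handling of a code that is not yet in the table corresponds to the special case analysed later in this article.

```python
def lzw_decode(codes, initial_dictionary):
    """Decode a list of integer codes back into a string (simplified sketch)."""
    dictionary = {code: s for s, code in initial_dictionary.items()}  # code -> string
    next_code = max(dictionary) + 1
    w = dictionary[codes[0]]                  # the first code is always a single symbol
    result = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        elif code == next_code:
            entry = w + w[0]                  # code not yet in table: must be w + first char of w
        else:
            raise ValueError("invalid LZW code: %d" % code)
        result.append(entry)
        dictionary[next_code] = w + entry[0]  # reconstruct the entry the encoder just added
        next_code += 1
        w = entry
    return "".join(result)
```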
If variable-width codes are being used, the encoder and decoder must be careful to change the width at the same points in the encoded data, or they will disagree about where the boundaries between individual codes fall in the stream. In the standard version, the encoder increases the width from p to p + 1 when a sequence ω + s is encountered that is not in the table (so that a code must be added for it) but the next available code in the table is 2^p (the first code requiring p + 1 bits). The encoder emits the code for ω at width p (since that code does not require p + 1 bits), and then increases the code width so that the next code emitted will be p + 1 bits wide.
The decoder is always one code behind the encoder in building the table, so when it sees the code for ω, it will generate an entry for code 2^p − 1. Since this is the point where the encoder will increase the code width, the decoder must increase the width here as well: at the point where it generates the largest code that will fit in p bits.
Unfortunately, some early implementations of the encoding algorithm increase the code width and then emit ω at the new width instead of the old width, so that to the decoder it looks like the width changes one code too early. This is called "Early Change"; it caused so much confusion that Adobe now allows both versions in PDF files, but includes an explicit flag in the header of each LZW-compressed stream to indicate whether Early Change is being used. Out of the graphics file formats capable of using LZW compression, TIFF uses early change, while GIF and most others don't.
When the table is cleared in response to a clear code, both encoder and decoder change the code width after the clear code back to the initial code width, starting with the code immediately following the clear code.
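A small helper expressing this width rule might look like the following sketch (illustrative only; the 12-bit default maximum is the typical value mentioned above, and a real format fixes these parameters explicitly).

```python
def code_width(next_code, initial_width, early_change=False, max_width=12):
    """Width in bits at which the code being emitted (or read) right now is written.

    next_code is the value the next dictionary entry will receive.  Standard LZW
    widens the codes only after an entry with value 2**p has actually been created;
    "Early Change" widens them one code earlier.  After a clear code, the caller
    resets next_code, so the width falls back to initial_width.
    """
    n = next_code if early_change else next_code - 1
    return min(max(n.bit_length(), initial_width), max_width)
```

For the 5-bit example later in this article, code_width(32, 5) returns 5 under the standard rule and 6 with early_change=True.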
Since the codes emitted typically do not fall on byte boundaries, the encoder and decoder must agree on how codes are packed into bytes. The two common methods are LSB-First ("Least Significant Bit First") and MSB-First ("Most Significant Bit First"). In LSB-First packing, the first code is aligned so that the least significant bit of the code falls in the least significant bit of the first stream byte, and if the code has more than 8 bits, the high-order bits left over are aligned with the least significant bits of the next byte; further codes are packed with their least significant bit going into the least significant bit not yet used in the current stream byte, proceeding into further bytes as necessary. MSB-First packing aligns the first code so that its most significant bit falls in the most significant bit of the first stream byte, with overflow aligned with the most significant bits of the next byte; further codes are written with their most significant bit going into the most significant bit not yet used in the current stream byte.
GIF files use LSB-First packing order. TIFF files and PDF files use MSB-First packing order.
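For concreteness, here is a sketch of LSB-First packing in Python (the GIF convention), assuming the codes arrive as (value, width) pairs; MSB-First packing would instead shift the new bits in beneath the bits already queued and flush the top eight bits of the buffer first. The pair representation is an assumption of this sketch, not part of any format.

```python
def pack_lsb_first(codes):
    """Pack (value, width) code pairs into bytes, least significant bit first (GIF order)."""
    out = bytearray()
    bit_buffer = 0                           # bits waiting to be written; LSB = oldest bit
    bit_count = 0
    for value, width in codes:
        bit_buffer |= value << bit_count     # the new code goes above the bits already queued
        bit_count += width
        while bit_count >= 8:
            out.append(bit_buffer & 0xFF)    # flush the lowest (oldest) eight bits
            bit_buffer >>= 8
            bit_count -= 8
    if bit_count:
        out.append(bit_buffer & 0xFF)        # final partial byte, high bits zero-padded
    return bytes(out)
```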
The following example illustrates the LZW algorithm in action, showing the status of the output and the dictionary at every stage, both in encoding and decoding the data. This example has been constructed to give reasonable compression on a very short message. In real text data, repetition is generally less pronounced, so longer input streams are typically necessary before the compression builds up efficiency.
The plaintext to be encoded (from an alphabet using only the capital letters) is:
TOBEORNOTTOBEORTOBEORNOT#
The # is a marker used to show that the end of the message has been reached. There are thus 26 symbols in the plaintext alphabet (the 26 capital letters A through Z), plus the stop code #. We arbitrarily assign these the values 1 through 26 for the letters, and 0 for '#'. (Most flavors of LZW would put the stop code after the data alphabet, but nothing in the basic algorithm requires that. The encoder and decoder only have to agree what value it has.)
A computer will render these as strings of bits. Five-bit codes are needed to give sufficient combinations to encompass this set of 27 values. The dictionary is initialized with these 27 values. As the dictionary grows, the codes will need to grow in width to accommodate the additional entries. A 5-bit code gives 2^5 = 32 possible combinations of bits, so when the 33rd dictionary word is created, the algorithm will have to switch at that point from 5-bit strings to 6-bit strings (for all code values, including those which were previously output with only five bits). Note that since the all-zero code 00000 is used, and is labeled "0", the 33rd dictionary entry will be labeled 32. (Previously generated output is not affected by the code-width change, but once a 6-bit value is generated in the dictionary, it could conceivably be the next code emitted, so the width for subsequent output shifts to 6 bits to accommodate that.)
The initial dictionary, then, will consist of the following entries: # = 0, A = 1, B = 2, C = 3, D = 4, E = 5, F = 6, G = 7, H = 8, I = 9, J = 10, K = 11, L = 12, M = 13, N = 14, O = 15, P = 16, Q = 17, R = 18, S = 19, T = 20, U = 21, V = 22, W = 23, X = 24, Y = 25, Z = 26.
Buffer input characters in a sequence ω until ω + next character is not in the dictionary. Emit the code for ω, and add ω + next character to the dictionary. Start buffering again with the next character. (The string to be encoded is "TOBEORNOTTOBEORTOBEORNOT#".)
Current Sequence | Next Char | Output Code | Output Bits | Extended Dictionary | Comments
NULL | T | | | |
T | O | 20 | 10100 | 27: TO | 27 = first available code after 0 through 26
O | B | 15 | 01111 | 28: OB |
B | E | 2 | 00010 | 29: BE |
E | O | 5 | 00101 | 30: EO |
O | R | 15 | 01111 | 31: OR |
R | N | 18 | 10010 | 32: RN | 32 requires 6 bits, so for next output use 6 bits
N | O | 14 | 001110 | 33: NO |
O | T | 15 | 001111 | 34: OT |
T | T | 20 | 010100 | 35: TT |
TO | B | 27 | 011011 | 36: TOB |
BE | O | 29 | 011101 | 37: BEO |
OR | T | 31 | 011111 | 38: ORT |
TOB | E | 36 | 100100 | 39: TOBE |
EO | R | 30 | 011110 | 40: EOR |
RN | O | 32 | 100000 | 41: RNO |
OT | # | 34 | 100010 | | # stops the algorithm: send the current sequence
 | | 0 | 000000 | | and the stop code
Unencoded length = 25 symbols × 5 bits/symbol = 125 bits
Encoded length = (6 codes × 5 bits/code) + (11 codes × 6 bits/code) = 96 bits.
Using LZW has saved 29 bits out of 125, reducing the message by almost 22%. If the message were longer, then the dictionary words would begin to represent longer and longer sections of text, allowing repeated words to be sent very compactly.
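As a cross-check, the hypothetical lzw_encode and code_width sketches given earlier reproduce these figures (the alphabet mapping follows the text: '#' = 0 and A–Z = 1–26; the sketch treats '#' as an ordinary input symbol rather than a dedicated stop code, which here yields the same code sequence).

```python
alphabet = {'#': 0, **{chr(ord('A') + i): i + 1 for i in range(26)}}
codes = lzw_encode("TOBEORNOTTOBEORTOBEORNOT#", alphabet)
print(codes)   # [20, 15, 2, 5, 15, 18, 14, 15, 20, 27, 29, 31, 36, 30, 32, 34, 0]

total_bits = 0
next_code = 27                     # first free code after '#' and A..Z
for i, value in enumerate(codes):
    total_bits += code_width(next_code, initial_width=5)
    if i < len(codes) - 1:         # every emission except the final flush adds an entry
        next_code += 1
print(total_bits)                  # 96 = 6 codes * 5 bits + 11 codes * 6 bits
```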
To decode an LZW-compressed archive, one needs to know in advance the initial dictionary used, but additional entries can be reconstructed, as they are always simply concatenations of previous entries.
Input Bits | Input Code | Output Sequence | New Dictionary Entry (Full) | New Dictionary Entry (Conjecture) | Comments
10100 | 20 | T | | 27: T? |
01111 | 15 | O | 27: TO | 28: O? |
00010 | 2 | B | 28: OB | 29: B? |
00101 | 5 | E | 29: BE | 30: E? |
01111 | 15 | O | 30: EO | 31: O? |
10010 | 18 | R | 31: OR | 32: R? | created code 31 (last to fit in 5 bits), so start reading input at 6 bits
001110 | 14 | N | 32: RN | 33: N? |
001111 | 15 | O | 33: NO | 34: O? |
010100 | 20 | T | 34: OT | 35: T? |
011011 | 27 | TO | 35: TT | 36: TO? |
011101 | 29 | BE | 36: TOB | 37: BE? | 36 = TO + 1st symbol (B) of next coded sequence received (BE)
011111 | 31 | OR | 37: BEO | 38: OR? |
100100 | 36 | TOB | 38: ORT | 39: TOB? |
011110 | 30 | EO | 39: TOBE | 40: EO? |
100000 | 32 | RN | 40: EOR | 41: RN? |
100010 | 34 | OT | 41: RNO | 42: OT? |
000000 | 0 | # | | | stop code
At each stage, the decoder receives a code X; it looks X up in the table and outputs the sequence χ it codes, and it conjectures χ + ? as the entry the encoder just added – because the encoder emitted X for χ precisely because χ + ? was not in the table, and the encoder goes ahead and adds it. But what is the missing letter? It is the first letter in the sequence coded by the next code Z that the decoder receives. So the decoder looks up Z, decodes it into the sequence ω and takes the first letter z and tacks it onto the end of χ as the next dictionary entry.
This works as long as the codes received are in the decoder's dictionary, so that they can be decoded into sequences. What happens if the decoder receives a code Z that is not yet in its dictionary? Since the decoder is always just one code behind the encoder, Z can be in the encoder's dictionary only if the encoder just generated it, when emitting the previous code X for χ. Thus Z codes some ω that is χ + ?, and the decoder can determine the unknown character as follows:
1. The decoder sees X and then Z.
2. It knows X codes the sequence χ and Z codes some unknown sequence ω.
3. It knows the encoder just added Z to code χ + some unknown character, and it knows that the unknown character is the first letter z of ω.
4. But the first letter of ω (= χ + ?) must then also be the first letter of χ.
5. So ω must be χ + x, where x is the first letter of χ.
6. So the decoder figures out what Z codes even though it's not in the table, and upon receiving Z, the decoder decodes it as χ + x, and adds χ + x to the table as the value of Z.
This situation occurs whenever the encoder encounters input of the form cScSc, where c is a single character, S is a string and cS is already in the dictionary, but cSc is not. The encoder emits the code for cS, putting a new code for cSc into the dictionary. Next it sees cSc in the input (starting at the second c of cScSc) and emits the new code it just inserted. The argument above shows that whenever the decoder receives a code not in its dictionary, the situation must look like this.
Although input of form cScSc might seem unlikely, this pattern is fairly common when the input stream is characterized by significant repetition. In particular, long strings of a single character (which are common in the kinds of images LZW is often used to encode) repeatedly generate patterns of this sort.
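A concrete instance, using the hypothetical sketches above with the two-symbol alphabet {'A': 0, 'B': 1}: the input "ABABABA" contains cScSc with c = 'A' and S = 'B', so the encoder emits a code immediately after creating it, and the decoder has to apply the rule above to decode it.

```python
tiny_alphabet = {'A': 0, 'B': 1}
codes = lzw_encode("ABABABA", tiny_alphabet)
print(codes)                                # [0, 1, 2, 4] -- code 4 is not yet in the decoder's table
print(lzw_decode(codes, tiny_alphabet))     # ABABABA
```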
The simple scheme described above focuses on the LZW algorithm itself. Many applications apply further encoding to the sequence of output symbols. Some package the coded stream as printable characters using some form of binary-to-text encoding; this increases the encoded length and decreases the compression rate. Conversely, increased compression can often be achieved with an adaptive entropy encoder. Such a coder estimates the probability distribution for the value of the next symbol, based on the observed frequencies of values so far. A standard entropy encoding such as Huffman coding or arithmetic coding then uses shorter codes for values with higher probabilities.
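As a toy illustration of the kind of adaptive estimate such a coder maintains (the helper name and the add-one smoothing are assumptions of this sketch, not a description of any particular entropy coder):

```python
from collections import Counter

def next_symbol_probability(history, symbol, alphabet_size):
    """Estimate P(symbol) from the frequencies observed so far, with add-one smoothing."""
    counts = Counter(history)
    return (counts[symbol] + 1) / (len(history) + alphabet_size)
```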
LZW compression became the first widely used universal data compression method on computers. A large English text file can typically be compressed via LZW to about half its original size.
LZW was used in the public-domain program compress, which became a more or less standard utility in Unix systems circa 1986. It has since disappeared from many distributions, both because it infringed the LZW patent and because gzip produced better compression ratios using the LZ77-based DEFLATE algorithm, but as of 2008 at least FreeBSD includes both compress and uncompress as a part of the distribution. Several other popular compression utilities also used LZW, or closely related methods.
LZW became very widely used when it became part of the GIF image format in 1987. It may also (optionally) be used in TIFF and PDF files. (Although LZW is available in Adobe Acrobat software, Acrobat by default uses DEFLATE for most text and color-table-based image data in PDF files.)
Various patents have been issued in the United States and other countries for LZW and similar algorithms. LZ78 was covered by U.S. Patent 4,464,650 by Lempel, Ziv, Cohn, and Eastman, assigned to Sperry Corporation, later Unisys Corporation, filed on August 10, 1981. Two US patents were issued for the LZW algorithm: U.S. Patent 4,814,746 by Victor S. Miller and Mark N. Wegman, assigned to IBM, originally filed on June 1, 1983, and U.S. Patent 4,558,302 by Welch, assigned to Sperry Corporation, later Unisys Corporation, filed on June 20, 1983.
In 1993–94, and again in 1999, Unisys Corporation received widespread condemnation when it attempted to enforce licensing fees for LZW in GIF images. The 1993–94 Unisys–CompuServe controversy (CompuServe being the creator of the GIF format) engendered a Usenet comp.graphics discussion, "Thoughts on a GIF-replacement file format", which in turn fostered an email exchange that eventually culminated in the creation of the patent-unencumbered Portable Network Graphics (PNG) file format in 1995.
Unisys's US patent on the LZW algorithm expired on June 20, 2003, 20 years after it had been filed. Patents that had been filed in the United Kingdom, France, Germany, Italy, Japan and Canada all expired in 2004, likewise 20 years after they had been filed.
LZMW (1985, by V. Miller, M. Wegman) – Searches input for the longest string already in the dictionary (the "current" match); adds the concatenation of the previous match with the current match to the dictionary. (Dictionary entries thus grow more rapidly than in LZW, but this scheme is much more complicated to implement.) Miller and Wegman also suggest deleting low-frequency entries from the dictionary when the dictionary fills up.
LZAP (1988, by James Storer) – modification of LZMW: instead of adding just the concatenation of the previous match with the current match to the dictionary, add the concatenations of the previous match with each initial substring of the current match. ("AP" stands for "all prefixes".) For example, if the previous match is "wiki" and current match is "pedia", then the LZAP encoder adds 5 new sequences to the dictionary: "wikip", "wikipe", "wikiped", "wikipedi", and "wikipedia", where the LZMW encoder adds only the one sequence "wikipedia". This eliminates some of the complexity of LZMW, at the price of adding more dictionary entries.
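A sketch of the LZAP dictionary update described here (illustrative Python; the helper name is an assumption):

```python
def lzap_new_entries(previous_match, current_match):
    """Return previous_match joined with every non-empty prefix of current_match."""
    return [previous_match + current_match[:i + 1] for i in range(len(current_match))]

# lzap_new_entries("wiki", "pedia")
# -> ['wikip', 'wikipe', 'wikiped', 'wikipedi', 'wikipedia']
```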
LZWL is a syllable-based variant of LZW.
Welch, Terry A. (1984). "A Technique for High-Performance Data Compression" (PDF). Computer 17 (6): 8–19.
Ziv, J.; Lempel, A. (1978). "Compression of Individual Sequences via Variable-Rate Coding" (PDF). IEEE Transactions on Information Theory 24 (5): 530–536.
David Salomon, Data Compression – The Complete Reference, 4th ed., page 209.
David Salomon, Data Compression – The Complete Reference, 4th ed., page 212.
U.S. Patent 4,558,302, Terry A. Welch, High speed data compression and decompression apparatus and method.