2021 Shanghai Software Qualification Examination: Past Exam Paper (2)
This paper consists of one section with 50 questions. Time allowed: 180 minutes. Total score: 100 points; 60 points are required to pass.

I. Single-choice questions (50 questions, 2 points each; for each question, only one of the options best fits)

1. Grid computing is a new (66) technology connecting distributed and (67) resources to a high-speed network and integrating them into a super-computer's worth of processing capacity. The significance and architecture of grid computing are explained. Several kernel technologies such as OGSI, resource management, task management, task scheduling, high-rate communication and security are described. Aiming at the particularity of the grid computing environment, a mechanism similar to search-engine technology is designed to register, discover and (68) the resources in the grid. The whole resource-management model is built by connecting the task manager of the local resource management system to the others in a P2P model. Tasks may migrate among the task managers in order to (69) the load. The tasks users submit may be executed in a relatively tight resource set, which will not only decrease the total communication overhead of the whole task but also (70) the performance of the system.
Blank (67) should be filled with ( ).
A. isomorphic  B. different  C. heterogeneous  D. alien

2. (Passage as in Question 1.) Blank (68) should be filled with ( ).
A. search for  B. find  C. look for  D. locate

3. WWW is popular for its multimedia transmission and friendly (71). Although network speed has improved considerably in recent years, the rapid (72) of Internet use, the inherent delay in the network and the Request/Response working mode of WWW still make Internet traffic very (73) and give no guarantee of Quality of Service. Because HTTP is stateless, the web server cannot know the users' demands and the users' requests cannot be predicted. Taking advantage of a cache mechanism and the temporal locality of WWW accesses, the browser can preserve previously accessed documents on the local machine. By this means, for documents in the local cache, the browser does not need to send requests to the remote server or receive the whole response from it. Pre-fetching uses the spatial locality of accesses. First, the users' access requests are predicted according to the user's current request. Secondly, the expected pages are fetched into the local cache while the user is browsing the current page. Finally, the user can access these pages from the local cache, which reduces the access delay to some degree. Pre-fetching is a kind of active caching that can cache pages the user has not yet requested. Applying pre-fetching technology on the web can greatly reduce the waiting time after users have sent their requests. This paper brings forward an intelligent web pre-fetching technique that can speed up fetching web pages. In this technique, we use a simplified WWW data model to represent the data in the web browser's cache and mine association rules from it. We store these rules in a knowledge base so as to (74) the user's actions. On the client side, agents are responsible for mining the users' interests and pre-fetching web pages based on the interest-association repository. Therefore it is (75) for the users to speed up the browsing.
Blank (73) should be filled with ( ).
A. quick  B. rapid  C. complicated  D. slow

4. (Passage as in Question 1.) Blank (69) should be filled with ( ).
A. decrease  B. balance  C. enhance  D. keep

5. (Passage as in Question 3.) Blank (74) should be filled with ( ).
A. obtain  B. get  C. predict  D. update

6. (Passage as in Question 3.) Blank (75) should be filled with ( ).
A. transparent  B. clear  C. fuzzy  D. changeable

7. (Passage as in Question 1.) Blank (70) should be filled with ( ).
A. decrease  B. enhance  C. keep  D. balance
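The pre-fetching mechanism described in the passage for Questions 3, 5 and 6 can be illustrated with a minimal Python sketch: association rules mined from past browsing are kept in a small knowledge base, and an agent pre-fetches the pages a rule predicts while the user reads the current page. The class name, the rule format, the sample URLs and the prefetch_limit parameter are illustrative assumptions, not part of the exam text.

# Minimal sketch of rule-based web pre-fetching (illustrative only).
from collections import defaultdict

class PrefetchAgent:
    def __init__(self, prefetch_limit=2):
        self.rules = defaultdict(list)   # knowledge base: page -> predicted next pages
        self.cache = {}                  # local cache: url -> page content
        self.prefetch_limit = prefetch_limit

    def learn(self, session):
        # Mine simple pairwise rules (current page -> next page) from one browsing session.
        for cur, nxt in zip(session, session[1:]):
            if nxt not in self.rules[cur]:
                self.rules[cur].append(nxt)

    def fetch(self, url):
        # Serve from the local cache when possible, then pre-fetch likely successors.
        page = self.cache.pop(url, None)
        if page is None:
            page = self._download(url)        # cache miss: go to the remote server
        for candidate in self.rules[url][:self.prefetch_limit]:
            self.cache.setdefault(candidate, self._download(candidate))
        return page

    @staticmethod
    def _download(url):
        return f"<contents of {url}>"         # stand-in for a real HTTP request

agent = PrefetchAgent()
agent.learn(["/index", "/news", "/sports"])   # past session used to mine rules
agent.fetch("/index")                         # pre-fetches /news into the local cache
print("/news" in agent.cache)                 # True: the next request is served locally

The point of the sketch is the same as in the passage: the user's next request is predicted from the current one, so a later access to the predicted page is answered from the local cache without a round trip to the server.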
8. The foundation of information engineering is information strategy planning. The starting point of planning is to translate (5) and the enterprise's information needs into information-system objectives. Implementing information systems engineering means building a data processing center with a stable data model for the enterprise, to satisfy the information needs of managers at all levels; it takes (6) as the center of information processing.
A. Transaction processing  B. The current mixed manual/computerized information system  C. The enterprise's strategic objectives  D. The requirements of the top executive

9. Among the following statements about a repository, the most appropriate is (3); (4) is not part of what a repository contains.
A. A place storing all documents, knowledge and products of one or more information systems or projects
B. A place storing software components that support information-system development
C. A place storing the various kinds of information needed during software maintenance
D. A place storing source-code analysis tools used for reverse engineering and their analysis results

10. An information strategy planning report should consist of three main parts: the summary, the plan and the appendices. Topics covered in the summary include the scope of the information strategy planning effort, the enterprise's business objectives and strategic priorities, the impact of information technology on the business, an evaluation of the existing information environment, the recommended system strategy, the recommended technology strategy, the recommended organizational strategy, the recommended action plan, and so on. The system strategy is a summary of (10) and (11).
A. Technology architecture planning  B. Overall network planning  C. Database structure planning  D. Information architecture planning

11. Both the Business Systems Planning method and Information Engineering recommend building a C/U matrix M that represents the relationships between data classes (subject databases) and processes: if process i creates data class k, then Mik = C; if process i uses data class k, then Mik = U. After matrix M is rearranged according to certain rules, it yields a scheme for partitioning the system into subsystems, identifies the (7) and (8) associated with each subsystem, and also shows the (9) between subsystems.
A. Relational databases  B. Hierarchical databases  C. Network databases  D. Shared databases

12. Suppose the information source is a set of q discrete symbols S1, S2, ..., Si, ..., Sq, each symbol in the set is independent, any symbol Si occurs with probability P(Si), and the probabilities sum to 1. Then the information content of symbol Si, I(Si), equals (17), and its unit is (18).
A. -logq P(Si)  B. logq P(Si)  C. -log2 P(Si)  D. log2 P(Si)

13. (Stem as in Question 8.)
A. Data  B. Processes  C. Functions  D. Applications

14. (Stem as in Question 11.)
A. Relational databases  B. Network databases  C. Specialized (private) databases  D. Subset databases

15. (Stem as in Question 12.)
A. Bit  B. Information entropy  C. dB  D. No unit

16. (Stem as in Question 9.)
A. Network directories  B. CASE tools  C. Extranet interfaces  D. Printed documents

17. (Stem as in Question 10.)
A. Business system architecture planning  B. Organizational structure planning  C. Process architecture planning  D. System development planning

18. (Stem as in Question 11.)
A. Process references  B. Functional relationships  C. Data storage  D. Data communication
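The quantity asked about in Questions 12 and 15 above is the self-information of a symbol, I(Si) = -log2 P(Si), measured in bits. A minimal numerical check in Python, where the probability value 1/8 is only an illustrative assumption:

# Self-information I(Si) = -log2 P(Si), in bits (relates to Questions 12 and 15).
import math

p = 1 / 8                    # illustrative symbol probability
info_bits = -math.log2(p)
print(info_bits)             # 3.0: a symbol with probability 1/8 carries 3 bits of information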
19. Read the following description and answer the question below. Description: a company has built an e-commerce website. The figure below shows the search page of the site; the user must fill in at least one of the keyword or category fields before a search can be performed, otherwise a prompt box pops up. The second figure shows the interface produced by running the member.asp file located in the publishing directory c:addq. Registered users can log in to the site through it; if a user is not registered, clicking the "Register now" link opens the register.asp file in the local directory c:addq to register. Among the following technology combinations, ( ) cannot be used to develop dynamic web pages.
A. HTML+JSP  B. HTML+XML  C. XML+JSP  D. XML+ASP

20. Which of the following interpretations of http://www.cc.org/welcome.html is incorrect?
A. http is a URL
B. http://www.cc.org/welcome.html addresses welcome.html
C. www.cc.org is the name of the server host
D. welcome.html is the name of the web page file

21. Suppose the management station of a LAN polls the managed devices once every 15 minutes and one query takes 200 ms. The management station can support at most _ network devices.
A. 400  B. 4000  C. 4500  D. 5000

22. Which of the following descriptions of the machine-room environment is incorrect?
A. The machine room must use anti-static flooring
B. The machine room must be fitted out with fire-resistant materials
C. Direct sunlight on the equipment should be avoided so as to control the temperature in the room
D. To shorten signal cables and thus avoid signal attenuation, appropriate spacing should be kept between devices

23. Layer-3 switching performs switching using _.
A. IP addresses  B. MAC addresses  C. Port numbers  D. Application protocols

24. A proxy server is a kind of server software; its functions do not include _.
A. Tiered management of users  B. Adding a cache to improve access speed  C. Saving IP address overhead  D. Implementing intrusion detection

25. Two companies want to communicate securely over the Internet, ensuring that the data transmitted from source to destination appears only as ciphertext, and they do not want the extra expense of special security units at intermediate nodes. The most suitable encryption approach is _, and the session-key algorithm used should be _.
A. Link encryption  B. Node encryption  C. End-to-end encryption  D. Hybrid encryption

26. A host on a LAN has the IP address 176.68.160.12, and 22 bits are used as the network address. The subnet mask of this LAN is _, and the maximum number of hosts that can be connected is _.
A. 255.255.255.0  B. 255.255.248.0  C. 255.255.252.0  D. 255.255.0.0

27. The distance between the backs of cabinets or racks arranged back to back should be no less than _ m.
A. 1  B. 2.6  C. 1.5  D. 112

28. The network interconnection devices corresponding to the data link layer, the network layer and the transport layer, respectively, are _.
A. Router, bridge, gateway  B. Router, gateway, bridge  C. Bridge, router, gateway  D. Gateway, router, bridge

29. Network latency testing measures the time a network system needs to forward packets under load. For a cut-through device, latency is the interval _.
A. from the moment the last bit of the input frame reaches the input port to the moment the first bit of the output frame appears at the output port
B. from the moment the first bit of the input frame reaches the input port to the moment the first bit of the output frame appears at the output port
C. from the moment the first bit of the input frame reaches the input port to the moment the last bit of the output frame appears at the output port
D. from the moment the last bit of the input frame reaches the input port to the moment the last bit of the output frame appears at the output port

30. (Scenario as in Question 25.) The session-key algorithm used should be _.
A. RSA  B. RC-5  C. MD5  D. ECC

31. When a network connectivity fault occurs, the first thing to check is generally _.
A. System viruses  B. Routing configuration  C. Physical connectivity  D. Host failure

32. (Scenario as in Question 26.) The maximum number of hosts that can be connected is _.
A. 254  B. 512  C. 1022  D. 1024

33. Which of the following statements about client/server network operating systems is incorrect?
A. A LAN has at least one server dedicated to providing shared resources and services to the network
B. Current operating systems of this kind include the server editions of UNIX, Linux, etc.
C. Compared with operating systems supporting the remote-terminal/host model, they are easier to use
D. They allow the resources of any computer to be shared by the other computers on the network

34. In the seven-layer OSI reference model, the layer between the data link layer and the transport layer is _.
A. The physical layer  B. The network layer  C. The session layer  D. The presentation layer

35. The class of IP address that identifies the largest number of hosts is _.
A. Class D  B. Class C  C. Class B  D. Class A

36. The web address that conforms to the URL format is _.
A. http/  B. http:  C.  D. http:/

37. Regarding the implementation of concealed work in structured cabling, which of the following statements is incorrect?
A. All iron fittings of the non-conductive parts of the raceways should be interconnected and bonded so that they form a continuous conductor
B. Cables laid in raceways should be straight and should have some slack
C. Raceways embedded in a building may be of different sizes, laid out in one or two layers, and at least two should be embedded
D. Raceways should be led into the distribution box through insulated plastic conduit

38. Which of the following statements about the design of a structured cabling system is incorrect?
A. The selected
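For reference, the arithmetic behind Questions 21, 26 and 32 above can be verified with a short Python sketch. The figures are taken straight from the question stems; the ipaddress module is part of the Python standard library.

# Worked checks for the arithmetic in Questions 21, 26 and 32.
import ipaddress

# Q26/Q32: a 22-bit network prefix -> subnet mask and usable host count.
net = ipaddress.ip_network("176.68.160.12/22", strict=False)
print(net.netmask)            # 255.255.252.0
print(net.num_addresses - 2)  # 1022 usable hosts (excluding network and broadcast addresses)

# Q21: a 15-minute polling cycle with 200 ms per query.
print((15 * 60) / 0.2)        # 4500.0 devices can be polled within one cycle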