Resumo

Many of the applications being developed, especially in the context of e-Science and across fields of knowledge such as engineering, medicine and biology, demand large-scale execution environments that make it possible not only to work with very large data sets but also to obtain results within viable times. Data centres are the principal means of meeting this demand, offering supercomputers composed of servers with ever-increasing processing, memory and storage capacities. However, given the high cost of acquiring, deploying and maintaining data centres, their administrators have sought mechanisms to make their utilisation more efficient. The classic strategies concentrated on mechanisms that reserved resources for each job individually, making the job the sole holder of the allocated servers until the end of its execution. The major problem with this approach is the inefficient use of the servers and the relatively long turnaround times. Task scheduling is one of the central problems of computing, for which researchers seek efficient methods of scheduling large sets of tasks on significant numbers of resources. Scheduling has traditionally concentrated on optimising the total execution time (makespan), but recently other objectives have also been pursued, for example: job throughput; energy consumption; and, particularly in the context of cloud computing, the rental cost of the required resources, the number of Service Level Agreements (SLAs) satisfied and execution deadlines. This work proposes an alternative strategy for scheduling a set of parallel applications on shared resources. Instead of trying to achieve the shortest execution time, it seeks to allow the applications themselves to schedule their execution so as to finish within the respective deadlines (or targets) determined by their users. The new policy takes into account that environments should share their resources in order to improve utilisation, so that parallel applications not only try to achieve their own objectives but also allow the others to achieve theirs. Using the concept of autonomic applications, a behavioural heuristic was proposed that controls an application's demand for a server's resources as a function of the inferred demands of the competing applications. The experiments carried out verify the possibility of creating altruistic behaviour in parallel applications without any explicit communication. It was observed, during execution in various scenarios, that the proposed implementation, based on the sociology of parallel applications, was able to meet the majority of the defined targets and presented promising results for future work on this new approach to resource management.

Palavras-chave: Distributed computing; Autonomic computing; Parallel applications; Dynamic task scheduling; MPI; EasyGrid AMS.

Abstract

In the context of e-Science, many of the applications being developed in the various areas of expertise, such as engineering, medicine and biology, require large-scale execution environments that not only handle large data sets, but also provide the results in acceptable run times.
Data centres are currently the favoured answer to meet these demands, offering clusters composed of a growing number of servers, each with ever-increasing processing, memory and storage capacities. However, given their high cost of acquisition, deployment and maintenance, administrators are constantly seeking management mechanisms to use data centres more efficiently. Most classic scheduling strategies focused on mechanisms that reserve resources in advance for each job individually, making the job the exclusive user of the allocated servers until the end of its execution. The major problem with this approach is the inefficient use of server capacity and the relatively long turnaround times. Task scheduling is a key performance issue in computing, for which research has focused intensely on trying to obtain efficient solutions to map large sets of tasks onto significant numbers of resources. Algorithms have traditionally focused on optimizing the total execution time (makespan), but recently other objectives have also become the focus of optimization, for example: throughput of jobs; energy consumption; and, particularly in the context of cloud computing, the rental cost of the required resources and the satisfaction of Service Level Agreements (SLAs) and execution deadlines. This dissertation proposes an alternative strategy for scheduling a set of parallel applications on a collection of shared computing resources. Instead of trying to achieve the lowest execution time, this work seeks to allow the applications to schedule themselves in order to complete their executions within their respective time limits, defined previously by their users. This new strategy takes into account that environments should share their resources to achieve better utilization, and that parallel applications should try not only to achieve their own goals, but to allow others to succeed as well. Using the concept of autonomic applications, a heuristic has been proposed that controls an application's demand for the resources of a server in light of its own needs and the inferred needs of competing applications. Experimental analysis has verified the possibility of creating altruistic behaviour in parallel applications without any explicit communication between them. The concurrent execution of a number of applications, in various scenarios, has shown that the proposed implementation, based on the sociology of parallel applications, is able to permit these applications to meet the majority of their target execution times. These promising results should motivate further research into this new approach to resource management.

Keywords: Distributed computing; Autonomic computing; Parallel applications; Dynamic task scheduling; MPI; EasyGrid AMS.
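To make the kind of decision rule the abstract describes concrete, the Python sketch below illustrates one possible form of a deadline-driven, altruistic demand heuristic. It is a hypothetical illustration under stated assumptions, not the dissertation's actual EasyGrid AMS heuristic: the function name, its parameters and the control formula are all assumed for exposition. The idea, as in the abstract, is that each application periodically estimates the processing rate it needs to meet its deadline, infers the server's full capacity from what its current share actually delivered (a shortfall signals competing applications), and then requests only the share it needs, releasing the surplus to competitors without any explicit communication.

    # Hypothetical sketch of an altruistic, deadline-driven demand rule.
    # All names and the formula are illustrative assumptions, not the
    # dissertation's actual EasyGrid AMS implementation.

    def adjust_demand(remaining_work, time_left, achieved_rate,
                      current_share, max_share=1.0):
        """Return the fraction of a server's capacity the application
        should request for the next scheduling interval."""
        if time_left <= 0:
            return max_share  # deadline passed: request all it may use

        # Rate (work units per second) needed to finish exactly on time.
        required_rate = remaining_work / time_left

        # Rate the whole server would deliver if held alone, inferred
        # from what the current share actually achieved; a low achieved
        # rate relative to the share implies contention from others.
        if current_share > 0 and achieved_rate > 0:
            solo_rate = achieved_rate / current_share
        else:
            solo_rate = required_rate  # no measurement yet: assume feasible
        if solo_rate <= 0:
            return max_share

        # Altruistic rule: ask only for the share needed to meet the
        # deadline, leaving the surplus for competing applications.
        needed_share = required_rate / solo_rate
        return max(0.0, min(max_share, needed_share))

    # Example: halfway through its work with half its time window left
    # and currently achieving exactly the rate it needs, an application
    # keeps its share rather than grabbing more:
    # adjust_demand(50, 100, achieved_rate=0.5, current_share=0.5) -> 0.5

The design point the sketch tries to capture is that no application ever communicates with, or even identifies, its competitors: contention is observed only through the gap between the share requested and the rate achieved, which is what allows altruistic behaviour to emerge without explicit coordination.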