Adam H. Villa and Elizabeth Varki
Large file transfers, data retrieval, academic networks
Cloud/grid computing is envisioned as a predominant computing model of the future, and the movement of files between cloud and client is intrinsic to this model. With the creation of ever-expanding data sets, file sizes have increased dramatically. Consequently, terabyte file transfers are expected to be the "next big" Internet application. This application differs from other Internet applications in that it requires extensive bandwidth, orders of magnitude more than the bandwidth requirements of existing applications. It is essential to determine whether existing network infrastructures can handle the additional workload that terabyte transfers would create. This is particularly critical for academic campus networks that are already strained by high user demand. This paper evaluates the system-level challenges of incorporating terabyte transfers into an existing campus network. The evaluation finds that the current campus network can handle large file transfers without major changes to its infrastructure, provided that a system-level service schedules and monitors the terabyte transfers on users' behalf. By taking control away from individual users, the service can leverage low-demand periods and dynamically re-purpose unused bandwidth.
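The scheduling idea described above can be illustrated with a minimal sketch: given a measured hourly utilization profile for a campus link, pick the low-demand hours and estimate how much data a background terabyte transfer could move through the spare capacity. The threshold, function names, and the 24-hour profile are illustrative assumptions, not details from the paper.

```python
# Hypothetical fraction of link capacity in use below which an hour
# counts as a "low-demand" window (illustrative assumption).
LOW_DEMAND_THRESHOLD = 0.30

def pick_transfer_windows(hourly_utilization, threshold=LOW_DEMAND_THRESHOLD):
    """Return the hours whose measured utilization falls below the
    threshold; the service would schedule terabyte transfers there."""
    return [hour for hour, load in enumerate(hourly_utilization)
            if load < threshold]

def spare_capacity_gb(link_capacity_gbps, hourly_utilization, hours):
    """Total data (GB) movable through the chosen low-demand hours,
    assuming the transfer may re-purpose all unused bandwidth."""
    gb_per_hour = link_capacity_gbps / 8 * 3600  # Gbit/s -> GB per hour
    return sum((1.0 - hourly_utilization[h]) * gb_per_hour for h in hours)
```

For example, on a 10 Gbit/s campus uplink with a profile that is quiet overnight (10% load for hours 0-5, 70% during the day, 20% in the evening), `pick_transfer_windows` selects the twelve off-peak hours, and `spare_capacity_gb` shows those hours alone could carry tens of terabytes, which is the intuition behind re-purposing unused bandwidth rather than upgrading the infrastructure.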