Build a web scraper for a specific website (Perl or Ruby)
$30-250 USD
Paid on delivery
I need a web scraper that gathers the list of events posted on a single specific website. The scraper will convert events found on that website's online calendar into files containing the details of those events.
At this URL you will find a "calendar" listing upcoming events:
[login to view URL]
The scraper must be written in Perl or Ruby. The program should take two command line arguments.
* The first argument should be an integer number of days into the future to scan. For example, if this argument were zero, only today's events would be scanned; if it were 2, all events occurring today, tomorrow, and the day after would be scanned.
* The second command line argument should be the name of a local directory where output files are placed.
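A minimal Ruby sketch of the date-window part of this interface (the function name and validation behavior are my assumptions, not part of the posting):

```ruby
require 'date'

# Inclusive window of dates to scan for the first argument:
# 0 => today only; 2 => today, tomorrow, and the day after.
# (Sketch under assumed argument handling, not a definitive implementation.)
def dates_to_scan(days_ahead, today = Date.today)
  raise ArgumentError, 'days_ahead must be >= 0' if days_ahead.negative?
  (today..(today + days_ahead)).to_a
end

days, out_dir = ARGV
puts dates_to_scan(Integer(days)).map(&:iso8601) if days && out_dir
```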
Each "event" found on the calendar should produce one output file. The output filename should consist of the date on which the event "occurs" (in the format YYYY-MM-DD), plus any additional characters needed to make the filename unique, with the suffix ".yaml". An example of a valid filename might be "[login to view URL]", indicating the second event occurring on February 17, 2019. The output file format should be YAML; an example of a valid output file is attached.
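One way to satisfy the naming rule is a per-date counter, so the second event on a date gets a "-2" before the suffix. A Ruby sketch (the "-N" disambiguator is an assumption; the posting only requires uniqueness and a ".yaml" suffix):

```ruby
require 'date'

# Appends a per-date counter so repeated dates stay unique:
# first event on a date => "YYYY-MM-DD-1.yaml", second => "YYYY-MM-DD-2.yaml".
# `counters` must be shared across calls within one scraping run.
def output_filename(date, counters)
  base = date.strftime('%Y-%m-%d')
  counters[base] += 1
  "#{base}-#{counters[base]}.yaml"
end
```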
Each YAML file is built up from scraping the event detail page such as this one: [login to view URL]
In the attached example, the scraped data elements are circled in red. Please note that the detail page shown in the example actually lists multiple events and should therefore generate multiple output YAML files (one for each date/time).
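The HTML parsing itself would typically be done with a gem such as Nokogiri (or, in Perl, a module such as Web::Scraper). The sketch below assumes the fields have already been extracted and shows only the one-file-per-occurrence YAML step; the field names here are placeholders for whatever is circled in red in the attached example:

```ruby
require 'yaml'
require 'fileutils'

# Writes one YAML file per scraped occurrence and returns the paths written.
# `occurrences` maps a precomputed unique filename (see the naming rule above)
# to a hash of already-extracted fields. Field names are illustrative only.
def write_occurrences(out_dir, occurrences)
  FileUtils.mkdir_p(out_dir)
  occurrences.map do |filename, fields|
    path = File.join(out_dir, filename)
    File.write(path, fields.to_yaml)
    path
  end
end
```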
For example, suppose that the script you create is called [login to view URL] and you invoke it with this command line on a Linux server:
./[login to view URL] 2 /tmp/scrapefiles
It would generate perhaps six YAML files in the /tmp/scrapefiles directory.
Project ID: #18803855
About the project
17 freelancers are bidding an average of $161 for this job
Hi, the project description is clear to me. I'm ready to put together a Perl script for you.
Hello, I will create the web scraper in Ruby. Please send me a message so we can discuss more; I have 8+ years of experience and have done similar jobs for many web applications. Thanks!
Hi there! I see you are looking for a Perl expert who can build a web scraper for you. Here I am! I can offer you 10+ years of working experience and a wide range of projects completed successfully by me. Here are so…
Hello, I have gone through the JD. I can work on the scraper; I have done similar jobs before and have 9+ years of experience in RoR. Please send me a message so we can discuss further. Thank you!
Hi, I have gone through your requirement to scrape lots of websites. I am EXPERT in building scraping tools/scripts, hence I can SURELY work on your project. I have 4 YEARS of EXPERIENCE in developing PHP-PYTHON…
Hi, I am a Ruby developer and DevOps engineer. Skills: - Ruby - System Admin - Docker, VirtualBox, Nanobox, Kubernetes - Hosting & maintaining any platform - Git, Bitbucket - MySQL, PostgreSQL - Web Scraping…
Hi, I can achieve this using Perl. I have 7 years of experience in Perl. I would also like to know the platform you are using to run the script. Let me know your preferences. Thanks,
Hi, I am experienced in web scraping using Ruby, using the Nokogiri gem to scrape static HTML pages and Watir for JavaScript-based web pages (I also use Watir for browser automation tasks). If you are interested please co…
Certifications & Achievements • Certified ScrumMaster® • Certified PRINCE2® Project Manager • Ex-Startup Founder with reasonable Exit • Product/Project Management Experience • Agile Coach for cultivating Agile Cult…
Hello, I can do what you are looking for. I use Ruby for web scraping (the Watir and/or Mechanize gems).