
Create a web spider as a Linux daemon

$250-750 USD

Awarded
Posted over 10 years ago

Paid on delivery
A Linux daemon should spider intranet websites and extract data. The base URLs of the intranet servers are given ([login to view URL], [login to view URL] ... [login to view URL]).

A C++ application (daemon) should be built with an interface for managing/creating a list of pages (URLs):

- add a host to be spidered (going through all pages on the site, creating a list of the site's pages)
- add a single URL to be spidered (adding it to a site's list of pages)
- remove a host (not to be spidered in future, deleting all related Xapian data and lists of pages)
- remove a single URL together with all of its Xapian data, removing it from the list of pages to be spidered
- set a list of URL parameters that should be ignored (session IDs, for example)
- specify a time interval after which an already-spidered URL has to be spidered again
- specify a time interval between successive requests to the same site IP, to avoid overloading it
- specify a max_depth parameter defining how deep a site should be crawled
- run one dedicated process per site host, e.g. 10 site IPs to spider -> 10 processes

The interface should make it possible to define, for example: spider all URLs from [login to view URL], all from [login to view URL] except [login to view URL], plus spider only [login to view URL].

The processes that work through the list of pages should:

- fetch the content of each URL and split it into text (content without HTML tags), encoding (charset), title, canonical URL and description (from the meta info), plus the current date and time*
- hand this data to a different application through a function call

The spider must not run into infinite loops; it therefore has to check whether the raw content of a URL is identical to that of the same URL with different parameters. If possible, it should use the canonical tag for this. To determine whether a page has already been spidered, the process can ask (via a function call) whether the URL has already been spidered (based on the data extracted with *) and, if so, whether that was more than max_interval days ago. If yes: spider it again and extract the data; if no: continue with the next URL.

Starting points:

- [login to view URL]
- [login to view URL]
- [login to view URL]

(Hypothetical sketches of the management interface, the per-host process model, the extracted page data, and the duplicate/freshness check follow below.)
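A minimal sketch of the management interface described above, written as a C++ abstract class. All identifiers (SpiderManager, add_host, set_max_depth, and so on) are illustrative assumptions, not names taken from the posting:

```cpp
// Hypothetical management interface for the spider daemon.
// Every identifier here is an assumption for illustration.
#include <chrono>
#include <string>
#include <vector>

class SpiderManager {
public:
    virtual ~SpiderManager() = default;

    // Crawl every reachable page on this host, building its page list.
    virtual void add_host(const std::string& base_url) = 0;
    // Add one specific page to a site's page list.
    virtual void add_url(const std::string& url) = 0;
    // Stop spidering a host; delete its Xapian data and page lists.
    virtual void remove_host(const std::string& base_url) = 0;
    // Remove a single page together with its Xapian data.
    virtual void remove_url(const std::string& url) = 0;

    // URL query parameters to ignore (e.g. session IDs).
    virtual void set_ignored_params(const std::vector<std::string>& names) = 0;
    // Re-spider a page after this interval has elapsed (max_interval).
    virtual void set_respider_interval(std::chrono::hours interval) = 0;
    // Minimum delay between two requests to the same site IP.
    virtual void set_request_delay(std::chrono::seconds delay) = 0;
    // How deep a site should be crawled (max_depth).
    virtual void set_max_depth(int depth) = 0;
};
```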
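One plausible reading of the "one process per site host" requirement is a classic fork-per-host worker model. The worker function is supplied by the caller here; nothing about it is specified in the posting:

```cpp
// Fork one worker process per configured site host (sketch).
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string>
#include <vector>

void spawn_workers(const std::vector<std::string>& hosts,
                   void (*spider_host)(const std::string&)) {
    for (const auto& host : hosts) {
        pid_t pid = fork();
        if (pid == 0) {           // child: spider exactly one host, then exit
            spider_host(host);
            _exit(0);
        }                         // parent: fork the next worker
    }
    while (wait(nullptr) > 0) {}  // reap all children as they finish
}
```

With this shape, 10 configured site IPs yield 10 worker processes, matching the "10 site-IPs to spider -> 10 processes" requirement; error handling for a failed fork() is omitted in the sketch.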
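The per-page extraction result and the "hand this data over through a function call" step could take the following shape; the struct fields mirror the list above, but the type and callback names are assumptions, not a given API:

```cpp
// Hypothetical shape of the extracted page data and of the function-call
// handoff to the consuming application.
#include <ctime>
#include <functional>
#include <string>

struct PageData {
    std::string url;            // URL as fetched
    std::string canonical_url;  // from <link rel="canonical">, if present
    std::string title;          // contents of <title>
    std::string description;    // <meta name="description">
    std::string charset;        // encoding (charset)
    std::string text;           // page content with HTML tags stripped
    std::time_t fetched_at;     // current date+time of the fetch (*)
};

// The consuming application registers a callback; the spider invokes it
// once per fetched page.
using PageSink = std::function<void(const PageData&)>;
```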
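The loop-prevention rule (prefer the canonical tag, otherwise compare raw content across URLs that differ only in parameters) and the max_interval freshness check could be combined in a small index like the one below. std::hash stands in for a real content digest purely for illustration, and the interval is passed as seconds for simplicity:

```cpp
// Sketch of duplicate detection (canonical tag first, raw-content hash as
// fallback) and of the "already spidered less than max_interval ago?" check.
#include <ctime>
#include <functional>
#include <string>
#include <unordered_map>
#include <unordered_set>

struct SeenIndex {
    std::unordered_set<std::string> canonicals;        // canonical URLs seen
    std::unordered_set<std::size_t> content_hashes;    // raw-body hashes seen
    std::unordered_map<std::string, std::time_t> last; // url -> last fetch time

    // True if this page duplicates something already spidered, e.g. the same
    // content reached via a URL that differs only in ignored parameters.
    bool is_duplicate(const std::string& canonical_url,
                      const std::string& raw_body) {
        if (!canonical_url.empty())                    // prefer the canonical tag
            return !canonicals.insert(canonical_url).second;
        std::size_t h = std::hash<std::string>{}(raw_body);
        return !content_hashes.insert(h).second;
    }

    // True if the URL was spidered less than max_interval ago (skip it);
    // false means it is due to be spidered (again).
    bool is_fresh(const std::string& url, std::time_t now,
                  std::time_t max_interval) const {
        auto it = last.find(url);
        return it != last.end() && now - it->second < max_interval;
    }
};
```

A production daemon would persist this index rather than keep it in memory, and would use a cryptographic digest (e.g. SHA-1) instead of std::hash to compare raw site content.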
Project ID: 4979819

About the project

5 proposals
Remote project
Active 11 yrs ago

About the client

Eichberg, Switzerland
5.0 rating (3 reviews)
Member since Sep 25, 2011

