Data scraping has become an essential tool for individuals and businesses in today's data-driven world, where the ability to collect and analyze data is vital to success. It allows you to extract valuable data from websites, offering meaningful insights, well-informed decision-making, and a competitive edge.
In this blog, we'll learn how to extract product details from Costco with Python. Our focus will be the "Electronics" category, and specifically the "Audio/Video" subcategory. We aim to scrape key attributes such as product name, brand, color, connection type, item ID, price, model, category, and description for every electronic device listed.
From this product category, the following features are scraped: product name, brand, color, connection type, item ID, price, model, category, and description.
Before we go through the code, we'll have to install some dependencies and libraries. We'll use Python for scraping along with two well-known scraping libraries: Selenium and Beautiful Soup. Beautiful Soup helps us parse XML and HTML documents, whereas Selenium automates web browsers for scraping and testing purposes.
Once the libraries are installed, we'll review the website's structure to identify the elements we need to scrape. This involves studying the site's HTML and recognizing the particular tags and attributes that contain the information we're interested in.
With that understanding in hand, we'll start writing the Python code to scrape the website. We'll use Beautiful Soup to parse and extract the data and Selenium to automate the browser actions required to load the pages. Once the script is ready, we'll run it and save the data to a CSV file for easy analysis.
pandas is a library for data manipulation and analysis. It lets you store and manipulate the data extracted from a website. Here, we use pandas to convert the scraped data from a dictionary into a DataFrame, a format better suited to analysis and manipulation, and to save that DataFrame as a CSV file so it can easily be opened and used in other software.
Another library is lxml, for processing HTML and XML documents; we use it to parse the HTML content of a webpage. Specifically, we use lxml's etree module, imported as et (short for ElementTree), which offers a simple and efficient way to search and navigate the tree-like structure of an HTML document.
Beautiful Soup is a library that makes extracting data from web pages more accessible. It helps you parse a webpage's XML or HTML content and pull out the data you're interested in. Here, Beautiful Soup is used to parse the HTML content retrieved from a webpage.
Selenium is a library that lets you automate web browsers. You can use it to navigate and interact with a webpage programmatically, like filling out forms or clicking buttons.
WebDriver is the interface Selenium uses to interact with web browsers. It lets you control a browser and execute JavaScript commands. We use Selenium's webdriver module to automate interaction with the webpage: by creating an instance of a web driver and navigating to a particular URL, we can retrieve the page source, which can then be parsed and analyzed.
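Putting those together, here is a minimal sketch of the imports used in the rest of this walkthrough, assuming the packages are installed (e.g. via pip install pandas lxml beautifulsoup4 selenium):

```python
import pandas as pd                          # data storage and CSV export
from lxml import etree as et                 # ElementTree for XPath queries
from bs4 import BeautifulSoup                # HTML parsing
from selenium import webdriver               # browser automation
from selenium.webdriver.common.by import By  # locator strategies
```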
Creating an instance of the web driver is one of the most vital steps when using Selenium. A web driver is a class that interacts with a specific web browser such as Chrome, Firefox, or Edge. In the snippet below, we create an instance of the Chrome web driver using webdriver.Chrome(). This line of code lets us control a Chrome browser and interact with webpages just like a real user would.
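```python
# Create an instance of the Chrome web driver; this opens a browser
# window that the script controls (recent Selenium versions manage the
# matching chromedriver automatically).
driver = webdriver.Chrome()
```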
Using the power of web drivers and Selenium, you can unlock the full potential of data scraping and automate data collection like a professional! With a web driver, we can navigate across pages, interact with page elements, fill out forms, click buttons, and scrape the required information. This powerful tool lets us automate tasks and collect data far more efficiently.
Now that we have a basic understanding of web scraping and the tools we'll use, let's take a closer look at the different functions we've created for the scraping procedure. Breaking the work into functions helps with reusability, code organization, and maintainability, making the codebase easier to understand, update, and debug. We'll explain the purpose of each function and how it contributes to the overall procedure.
First, a function named extract_content is defined that takes one argument, a URL. It uses Selenium to navigate to the URL, retrieves the page source, parses it into a BeautifulSoup object with the lxml parser, and then passes the result to et.HTML() to convert it into an ElementTree object. The returned dom object can be used to navigate and search the HTML document's tree-like structure and extract information from the page.
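Based on that description, extract_content might look like this (it relies on the driver instance created above):

```python
def extract_content(url):
    # Load the page, parse the rendered source with BeautifulSoup, and
    # convert it into an lxml element tree for XPath queries.
    driver.get(url)
    page_content = driver.page_source
    product_soup = BeautifulSoup(page_content, 'lxml')
    dom = et.HTML(str(product_soup))
    return dom
```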
Next, the click_url() function uses the find_element() method with By.XPATH to locate the "Audio/Video" category link on Costco's electronics page, and the click() method to navigate to it. This lets us follow a particular link on the website by clicking it and then scrape the resulting page's content.
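A sketch of such a function; the exact XPath for the Audio/Video link is our assumption and may need adjusting to the live markup:

```python
def click_url(url):
    # Open the electronics page, click the Audio/Video category link,
    # and return the parsed content of the page we land on.
    driver.get(url)
    driver.find_element(
        By.XPATH, '//a[contains(text(), "Audio/Video")]').click()
    soup = BeautifulSoup(driver.page_source, 'lxml')
    return et.HTML(str(soup))
```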
The xpath() method of the dom object is used to get all the elements that match a given XPath expression. Here, the XPath selects the href attributes of all a elements that are descendants of elements with the class "categoryclist_v2". After navigating to the Audio/Video category, the function scrapes the links of the four displayed subcategories, enabling further scraping on those specific pages.
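A minimal sketch of this step (the name category_links follows the later references in this post):

```python
def category_links(url_content):
    # Collect the href of every link nested under elements with the
    # class "categoryclist_v2"; these are the four subcategory links.
    return url_content.xpath('//*[@class="categoryclist_v2"]//a/@href')
```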
With the four subcategory links in hand, we can extract the links of all products under these categories.
The product_links function uses the previously defined category_links() and extract_content() functions to visit each subcategory page and scrape the links of all products listed there. It uses the xpath() method of the content object with an XPath expression that selects the href attributes of all a elements that are descendants of the element with the automation-id "productList" and whose href ends with ".html".
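A sketch of product_links; since XPath 1.0 has no ends-with(), contains() on ".html" is used here as an approximation:

```python
def product_links(url_content):
    # Visit each subcategory page and gather every product link found
    # under the element with automation-id "productList".
    links = []
    for category_link in category_links(url_content):
        content = extract_content(category_link)
        links.extend(content.xpath(
            '//*[@automation-id="productList"]'
            '//a[contains(@href, ".html")]/@href'))
    return links
```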
With the product links collected, we can extract the required features of every product. Each of the following functions uses a try-except block to handle any errors that might occur while scraping a feature.
Inside the try block, the function uses the dom object's xpath() method to select the text of the element with the class "product-title". If the product name is not available, the function assigns the value "Product name is not available" to the 'product_name' column of the dataframe data at the current product's row.
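A minimal sketch of this pattern; the helper names (get_product_name and the get_* functions below) are ours, since the original post only describes the behavior. data is the DataFrame created later in the walkthrough:

```python
def get_product_name(dom, index):
    try:
        data.loc[index, 'product_name'] = dom.xpath(
            '//*[@class="product-title"]/text()')[0].strip()
    except Exception:
        data.loc[index, 'product_name'] = "Product name is not available"
```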
The function uses the dom object's xpath() method to select the text of the element with the itemprop "brand". If the brand name is not available, it assigns the value "Brand is not available" to the 'brand' column.
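Sketched the same way:

```python
def get_brand(dom, index):
    try:
        data.loc[index, 'brand'] = dom.xpath(
            '//*[@itemprop="brand"]/text()')[0].strip()
    except Exception:
        data.loc[index, 'brand'] = "Brand is not available"
```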
The function uses the dom object's xpath() method to select the product's price. If the price is not available, it assigns the value "Price is not available" to the 'price' column.
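A sketch of the price helper; note that the text above does not name the price element, so the selector used here is a placeholder assumption:

```python
def get_price(dom, index):
    try:
        # This automation-id is an assumption; inspect the live page
        # to confirm which element holds the price.
        data.loc[index, 'price'] = dom.xpath(
            '//*[@automation-id="productPriceOutput"]/text()')[0].strip()
    except Exception:
        data.loc[index, 'price'] = "Price is not available"
```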
This function uses the dom object's xpath() method to select the text of the element with the id "item-no". If the product ID is not available, it assigns the value "Item Id is not available" to the 'item_id' column.
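For example:

```python
def get_item_id(dom, index):
    try:
        data.loc[index, 'item_id'] = dom.xpath(
            '//*[@id="item-no"]/text()')[0].strip()
    except Exception:
        data.loc[index, 'item_id'] = "Item Id is not available"
```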
The function uses the dom object's xpath() method to select the text of the element with the automation-id "productDetailsOutput". If the product description is not available, it assigns the value "Description is not available" to the 'description' column.
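Following the same pattern:

```python
def get_description(dom, index):
    try:
        data.loc[index, 'description'] = dom.xpath(
            '//*[@automation-id="productDetailsOutput"]/text()')[0].strip()
    except Exception:
        data.loc[index, 'description'] = "Description is not available"
```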
This function uses the dom object's xpath() method to select the text of the element with the id "model-no". If the product model is not available, it assigns the value "Model is not available" to the 'model' column.
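For example:

```python
def get_model(dom, index):
    try:
        data.loc[index, 'model'] = dom.xpath(
            '//*[@id="model-no"]/text()')[0].strip()
    except Exception:
        data.loc[index, 'model'] = "Model is not available"
```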
The function uses the dom object's xpath() method to select the text of the first div element that is a following sibling of the element with the text "Connection Type". If the connection type is not available, it assigns the value "Connection type is not available" to the 'connection_type' column.
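Sketched from that description:

```python
def get_connection_type(dom, index):
    try:
        data.loc[index, 'connection_type'] = dom.xpath(
            '//*[text()="Connection Type"]'
            '/following-sibling::div[1]/text()')[0].strip()
    except Exception:
        data.loc[index, 'connection_type'] = "Connection type is not available"
```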
The function uses the dom object's xpath() method to select the text of the tenth element with the itemprop "name". If the product category is not available, it assigns the value "Category is not available" to the 'category' column.
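For example:

```python
def get_category(dom, index):
    try:
        # The tenth itemprop="name" element holds the category per the
        # page structure described above.
        data.loc[index, 'category'] = dom.xpath(
            '(//*[@itemprop="name"])[10]/text()')[0].strip()
    except Exception:
        data.loc[index, 'category'] = "Category is not available"
```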
This function uses the xpath() method to select the text of the first div element that is a following sibling of the element containing the text "Color". If the product color is not available, it assigns the value "Color is not available" to the 'color' column.
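And similarly:

```python
def get_color(dom, index):
    try:
        data.loc[index, 'color'] = dom.xpath(
            '//*[text()="Color"]'
            '/following-sibling::div[1]/text()')[0].strip()
    except Exception:
        data.loc[index, 'color'] = "Color is not available"
```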
With the necessary functions defined, we can now start the scraping procedure by calling them in sequence to retrieve the desired data.
The first step is navigating to Costco's electronics category page using the web driver and the category URL. We then use the click_url() function to click on the Audio/Video category and scrape the HTML content of the resulting page.
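In code, this step might look as follows; the exact category URL is an assumption for illustration:

```python
# Navigate to the electronics category and click through to Audio/Video.
url = 'https://www.costco.com/electronics.html'
url_content = click_url(url)
```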
To store the extracted data, we create a dictionary with the required columns: 'product_url', 'product_name', 'brand', 'price', 'item_id', 'model', 'connection_type', 'category', 'color', and 'description'. We then build a dataframe from this dictionary, called data, to hold all the scraped data.
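A sketch of that setup:

```python
# Dictionary of the required columns, converted into an empty DataFrame.
product_data = {'product_url': [], 'product_name': [], 'brand': [],
                'price': [], 'item_id': [], 'model': [],
                'connection_type': [], 'category': [], 'color': [],
                'description': []}
data = pd.DataFrame(product_data)
```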
The script then calls the product_links(url_content) function, which scrapes the links of all products available under the four subcategories of the Audio/Video category. These links are added to the 'product_url' column of the dataframe data.
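For example:

```python
# Assigning a list to a column of an empty DataFrame creates one row
# per product link.
data['product_url'] = product_links(url_content)
```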
The code then iterates over every product in the data dataframe, reading the product URL from the 'product_url' column and using extract_content() to retrieve the HTML content of the product page. It then calls the previously defined functions to scrape the individual features, such as product name, brand, price, item ID, description, model, connection type, category, and color, and assigns the values to the respective columns of the dataframe at the current index, efficiently collecting all the required data for every product.
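Putting the helpers together (using the hypothetical get_* names from the sketches above):

```python
for index in data.index:
    # Parse each product page once, then fill every feature column.
    product_dom = extract_content(data.loc[index, 'product_url'])
    get_product_name(product_dom, index)
    get_brand(product_dom, index)
    get_price(product_dom, index)
    get_item_id(product_dom, index)
    get_description(product_dom, index)
    get_model(product_dom, index)
    get_connection_type(product_dom, index)
    get_category(product_dom, index)
    get_color(product_dom, index)
```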
With the last line of code, the dataframe data, containing all the extracted data for every product, is exported to a CSV file called 'costco_data.csv'. This makes the scraped data easy to access and manipulate for further use or analysis.
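```python
# Write the collected data to disk; index=False drops the row numbers.
data.to_csv('costco_data.csv', index=False)
```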
By mastering the fundamentals of web scraping, you can unlock a world of valuable data for a wide range of applications, including market research, data analysis, and more. With the ability to scrape and analyze data from websites, the possibilities are endless!
We hope this blog has given you a solid foundation in web scraping techniques and inspired you to explore the many possibilities web scraping has to offer. So what are you waiting for? Start exploring and see what insights you can uncover with the power of web scraping.
Ready to experience the power of data scraping for your business? Contact Actowiz Solutions now! You can also reach us for your mobile app scraping and web scraping service requirements.
Web scraping enables businesses to access and analyze detailed product specifications from Costco, including prices, descriptions, availability, and reviews. By leveraging this data, companies can gain insights into customer preferences, monitor competitor pricing, and optimize their product offerings for better market performance.