0.0 - 2.0 years
0 - 0 Lacs
Mohali, Punjab
On-site
Male applicants are preferred.

We are looking for an enthusiastic and proactive Python Developer to join our development team. In this role, you’ll help build and maintain backend services, work on API integrations, and collaborate closely with frontend developers to ensure seamless functionality between backend and ReactJS applications. You will have the opportunity to learn from experienced developers and gain hands-on experience in developing scalable solutions that drive real-world impact.

Experience Required: 2-3 Years
Mode of Work: On-Site Only (Mohali, Punjab)
Mode of Interview: Face to Face (On-Site)
Contact for Queries: +91-9872993778 (Mon–Fri, 11 AM – 6 PM)
Note: This number will be unavailable on weekends and public holidays.

Key Responsibilities:
Backend Development: Assist in the development of clean, efficient, and scalable Python applications to meet business needs.
API Integration: Support the creation, management, and optimization of RESTful APIs to connect backend and frontend components.
Collaboration: Work closely with frontend developers to integrate backend services into ReactJS applications, ensuring smooth data flow and functionality.
Testing and Debugging: Help with debugging, troubleshooting, and optimizing applications for performance and reliability.
Code Quality: Write readable, maintainable, and well-documented code while following best practices. Participate in code reviews to maintain high coding standards.
Learning and Development: Continuously enhance your skills by learning new technologies and methodologies, contributing ideas to improve development processes.

Required Skills and Experience:
Problem Solving: Strong analytical skills with an ability to identify and resolve issues effectively. Previous working experience with LLMs and AI agents is a plus.
Teamwork: Ability to communicate clearly and collaborate well with cross-functional teams.
Programming Languages: Python (Core and Advanced), JavaScript, HTML, CSS
Frameworks: Django, Flask, FastAPI, LangChain
Libraries & Tools: Pandas, NumPy, Selenium, Scrapy, BeautifulSoup, Git, Postman, OpenAI API, REST APIs
Databases: MySQL, PostgreSQL, SQLite
Cloud & Deployment: Hands-on experience with AWS services (EC2, S3, etc.); building and managing cloud-based scalable applications
Web Development & API Integration: Backend development with Django, Flask, and FastAPI. Integration and consumption of RESTful APIs. Frontend collaboration and full-stack workflow understanding.
AI & Automation: Experience building AI-powered chatbots and assistants using OpenAI and LangChain. Familiarity with Retrieval-Augmented Generation (RAG) architecture. Automation of workflows and intelligent systems using Python.
Web Scraping & Data Handling: Real-time and large-scale web scraping using Scrapy, Selenium, and BeautifulSoup. Data extraction and transformation from public and structured sources.
Version Control & Testing: Proficient in Git for version control; API testing using Postman.

Preferred Qualifications:
Education: A degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).

Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹40,000.00 per month
Experience: Python: 2 years (Preferred)
Work Location: In person
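For context on the FastAPI-style backend work this posting describes, here is a minimal sketch of a JSON-in, JSON-out REST endpoint. The app title, route, and field names are illustrative assumptions, not this employer's actual code:

```python
# Minimal FastAPI sketch: one REST endpoint a ReactJS frontend could call.
# App title, route, and fields are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-backend")

class EchoRequest(BaseModel):
    text: str

@app.post("/echo")
def echo(req: EchoRequest) -> dict:
    # A real service would call business logic or a model here.
    return {"received": req.text, "length": len(req.text)}
```

Run with `uvicorn main:app --reload` (assuming the file is saved as main.py); a frontend consumes the endpoint over HTTP like any other REST API.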
Posted 1 month ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
We’re Hiring – Tech Lead (Web Scraping)
Company: Actowiz Solutions
Location: Ahmedabad (Work from Office)
Experience Required: 4+ years in IT scraping & 2+ years leading teams of 5+ developers

Why Join Us?
At Actowiz Solutions, you’ll lead talented developers, solve complex scraping challenges, and deliver high-impact, large-scale data solutions for clients worldwide. We’re looking for a hands-on technical leader who thrives under pressure, adapts quickly, and inspires teams to achieve excellence.

What You’ll Do
✅ Lead and mentor a team of 5+ developers in delivering scalable, robust scraping solutions.
✅ Work on advanced scraping modules – Scrapy, threading, requests, web automation.
✅ Handle blocking, captcha solving, reverse engineering, proxy & IP rotation (a proxy-rotation sketch follows this listing).
✅ Design and manage API integrations, SSL unpinning, Frida, version control, error handling, SQL, MongoDB, Pandas.
✅ Manage projects, documentation, and ensure delivery under tight timelines.
✅ Collaborate with cross-functional teams to maintain high technical standards.

Must-Have Skills
🔹 Advanced Python development
🔹 Web scraping architecture & optimization
🔹 Proxy/IP rotation & anti-bot measures
🔹 App automation & API management
🔹 SQL, MongoDB, Pandas
🔹 Leadership: project management, adaptability, accountability

Good to Have
Linux
Appium
Fiddler
Burp Suite

📩 Apply Now: hr@actowizsolutions.com
🌐 Learn More: www.actowizsolutions.com

#Hiring #TechLead #WebScraping #Python #Scrapy #DataEngineering #AhmedabadJobs #ActowizSolutions #Leadership #WorkFromOffice
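Proxy and IP rotation, named above, is a standard anti-blocking technique; a minimal sketch with the requests library follows. The proxy URLs are placeholders (assumptions), not working endpoints:

```python
# Minimal proxy-rotation sketch: cycle through a proxy pool per request.
import itertools
import requests

PROXIES = [
    "http://proxy1.example.com:8080",  # placeholder proxy endpoints
    "http://proxy2.example.com:8080",
]
proxy_pool = itertools.cycle(PROXIES)

def fetch(url: str) -> requests.Response:
    proxy = next(proxy_pool)  # rotate to the next proxy on every call
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

if __name__ == "__main__":
    print(fetch("https://httpbin.org/ip").json())
```

Production setups typically layer retry logic, per-proxy health checks, and randomized user agents on top of this.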
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Actowiz Solutions is Hiring: Senior HR Manager (Technical & Global Hiring)
Location: Ahmedabad (Work from Office)
Openings: 2
Experience: 5+ years in HR (with strong technical hiring background)

At Actowiz Solutions—a leader in web scraping, data analytics, and automation—you’ll own the full HR charter with a sharp focus on:
Recruitment: Hire top technical profiles globally (Python/Scrapy developers, web-scraping specialists, data engineers, QA, DevOps).
Employee Engagement: Build a high-trust, high-performance culture with programs that retain and grow talent.
HR Operations: Drive policies, performance management, compliance, and people analytics across the org.

What You’ll Do
Lead end-to-end global recruitment: JD design, sourcing, screening, tech assessments, offer management, and onboarding.
Build predictive talent pipelines for niche roles; reduce TTH & CPH with data-driven funnels.
Partner with leadership on org design, headcount planning, and skill matrices.
Run employee engagement calendars (1:1s, pulse/NPS, R&Rs, manager enablement, career paths).
Own performance cycles, goal setting (OKRs/KPIs), and improvement plans.
Maintain HR policy, compliance, and audit readiness; champion DEI & ethical hiring.
Use people analytics to report hiring velocity, attrition, retention risks, and engagement insights.

What We’re Looking For
5+ years in HR with 3+ years in technical/global hiring.
Proven success hiring top-tier technical talent (Python/Scrapy, data engineering, automation).
Strong stakeholder management with tech leaders; assessment design experience a plus.
Track record of engagement programs that lift retention and performance.
Hands-on with ATS/HRIS, LinkedIn Recruiter, and HR analytics.

📍 Ahmedabad | Full-time | Work from Office
👥 Positions: 2

Ready to build a scalable, engaging, and high-performance organization?
📩 Apply: hr@actowizsolutions.com
🌐 About us: actowizsolutions.com

#Hiring #SeniorHRManager #Recruitment #EmployeeEngagement #HRLeadership #GlobalHiring #TechnicalRecruitment #Python #Scrapy #AhmedabadJobs #ActowizSolutions #WorkFromOffice
Posted 1 month ago
2.0 - 3.0 years
4 - 5 Lacs
Mohali
On-site
Job Title: Full Stack Python Developer
Location: Mohali
Job Type: Full-time
Experience: 2-3 Years

Key Responsibilities:
Develop, test, and maintain efficient, reusable, and reliable Python code.
Collaborate with cross-functional teams to define, design, and ship new features.
Integrate user-facing elements developed by front-end developers with server-side logic.
Build and maintain RESTful APIs and third-party integrations.
Optimize applications for performance, scalability, and security.
Troubleshoot and debug existing applications.
Write clean, scalable, and well-documented code.
Participate in code reviews, testing, and documentation.

Required Skills & Qualifications:
Education: Bachelor's degree in Computer Science or related field (or equivalent experience).
Experience: 2-3 years as a Full Stack Developer specializing in Python, Django, and React.js.
Technical Skills:
Backend: Python (Django/Flask), API development, authentication systems.
Frontend: React.js, JavaScript, HTML, CSS, Bootstrap.
Databases: MySQL, PostgreSQL, MongoDB (design, indexing, optimization).
APIs & Integrations: RESTful API development, third-party integrations.
Web Scraping: BeautifulSoup, Scrapy, Selenium (data extraction & automation).
Cloud & DevOps: AWS, GCP, Docker, Kubernetes (preferred but not mandatory).
Agile Development: Experience in an Agile/Scrum environment.

Apply Now! If you are passionate about full-stack Python development and want to be a part of a dynamic team, send your updated resume to hr@swissdigitech.com or contact us at 9877588292.

Job Types: Full-time, Permanent
Pay: ₹40,000.00 - ₹45,000.00 per month
Work Location: In person
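As an illustration of the RESTful API work this posting lists, here is a minimal Flask sketch. The route, in-memory store, and payload shape are illustrative assumptions:

```python
# Minimal Flask REST sketch: create and list items over JSON.
from flask import Flask, jsonify, request

app = Flask(__name__)

ITEMS: list[dict] = []  # in-memory stand-in for a real database

@app.post("/items")
def create_item():
    item = request.get_json(force=True)
    ITEMS.append(item)
    return jsonify(item), 201

@app.get("/items")
def list_items():
    return jsonify(ITEMS)

if __name__ == "__main__":
    app.run(debug=True)
```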
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Panchkula, Haryana
On-site
We are currently seeking a Full Stack Developer with a minimum of 3+ years of experience and a strong background in Python-based web scraping. Familiarity with affiliate marketing is considered an added advantage for this full-time, on-site position based in Panchkula.

Your primary responsibilities will include:
- Building and overseeing scalable scraping systems tailored for extensive data extraction
- Crafting full-stack web applications encompassing both frontend and backend components
- Integrating various affiliate marketing tools, APIs, and tracking systems into the existing infrastructure
- Developing dashboards and utilities for analyzing scraped data and monitoring affiliate performance
- Collaborating closely with the team to translate data insights into actionable strategies

The ideal candidate should demonstrate expertise in the following areas:
- Proficiency in Python scraping tools such as Scrapy, BeautifulSoup, and Selenium
- Strong backend skills with Python frameworks like Django and Flask
- Frontend development experience using technologies like React, Vue, or similar
- Knowledge of affiliate platforms and tracking systems such as Impact, CJ, and Awin
- Hands-on experience with APIs, databases, and cloud deployment

A significant plus would be prior experience in constructing systems that drive digital marketing or affiliate campaigns. If you believe you fit the profile described above, or if you know someone who does, please reach out via direct message or send your CV to admin@chevronmedia.in. Let's collaborate and create impactful solutions together.
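To ground the Python scraping stack this role centers on, here is a minimal requests + BeautifulSoup sketch against a public scraping sandbox site; the selectors are specific to that site and stand in for real targets:

```python
# Minimal scraping sketch: fetch a page and extract structured fields.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://quotes.toscrape.com/", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

for quote in soup.select("div.quote"):
    text = quote.select_one("span.text").get_text(strip=True)
    author = quote.select_one("small.author").get_text(strip=True)
    print(f"{author}: {text[:60]}")
```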
Posted 1 month ago
1.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Be a part of India’s largest and most admired news network! Network18 is India's most diversified media company in the fast-growing media market. The company has a strong heritage and a strong presence in the magazine, television, and internet domains. Our brands like CNBC, Forbes, and Moneycontrol are market leaders in their respective segments. The company has over 7,000 employees across all major cities in India and has consistently managed to stay ahead of the industry's growth curve. Network18 brings together employees from varied backgrounds under one roof, united by the hunger to create immersive content and ideas. We take pride in our people, who we believe are the key to realizing the organization’s potential. We continually strive to enable our employees to realize their own goals by providing opportunities to learn, share, and grow.

Role Overview:
We are seeking a passionate and skilled Data Scientist with over a year of experience to join our dynamic team. You will be instrumental in developing and deploying machine learning models, building robust data pipelines, and translating complex data into actionable insights. This role offers the opportunity to work on cutting-edge projects involving NLP, Generative AI, data automation, and cloud technologies to drive business value.

Key Responsibilities:
Design, develop, and deploy machine learning models, with a strong focus on NLP (including advanced techniques and Generative AI) and other AI applications.
Build, maintain, and optimize ETL pipelines for automated data ingestion, transformation, and standardization from various sources.
Work extensively with SQL for data extraction, manipulation, and analysis in environments like BigQuery.
Develop solutions using Python and relevant data science/ML libraries (Pandas, NumPy, Hugging Face Transformers, etc.).
Utilize Google Cloud Platform (GCP) services for data storage, processing, and model deployment.
Create and maintain interactive dashboards and reporting tools (e.g., Power BI) to present insights to stakeholders.
Apply basic Docker concepts for containerization and deployment of applications.
Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
Stay abreast of the latest advancements in AI/ML and NLP best practices.

Required Qualifications & Skills:
1 to 4 years of hands-on experience as a Data Scientist or in a similar role.
Solid understanding of machine learning fundamentals, algorithms, and best practices.
Proficiency in Python and relevant data science libraries.
Good SQL skills for complex querying and data manipulation.
Demonstrable experience with Natural Language Processing (NLP) techniques, including advanced models (e.g., transformers) and familiarity with Generative AI concepts and applications.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.

Preferred Qualifications & Skills:
Familiarity and hands-on experience with Google Cloud Platform (GCP) services, especially BigQuery, Cloud Functions, and Vertex AI.
Basic understanding of Docker and containerization for deploying applications.
Experience with dashboarding tools like Power BI and building web applications with Streamlit.
Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium).
Knowledge of data warehousing concepts and schema design.
Experience in designing and building ETL pipelines.
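As a small illustration of the Hugging Face Transformers work mentioned above, here is a minimal NLP sketch. It assumes the transformers library is installed and network access is available to download a default small model on first run:

```python
# Minimal Transformers sketch: sentiment classification with a pipeline.
# The pipeline downloads a default DistilBERT-based model on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The quarterly traffic numbers look strong."))
# -> e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```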
Disclaimer: Please note Network18 and related group companies do not use the services of vendors or agents for recruitment. Please beware of such agents or vendors providing assistance. Network18 will not be responsible for any losses incurred. “We correspond only from our official email address”
Posted 1 month ago
2.0 - 4.0 years
0 Lacs
India
On-site
Job Profile Highlights:
Position Title: Data Analyst
Exp: 2-4 Years
Job Location: Ahmedabad (Nava Vadaj)

Perks:
5 Days Working
Bi-weekly events
Paid sick leaves
Casual leaves & CL encashment
Employee performance rewards
Friendly work culture
Medical Insurance

Key Responsibilities:
Data Collection & Extraction:
– Perform data scraping from websites, APIs, and other sources.
– Use third-party data extraction tools for gathering structured and unstructured data.
– Automate data pulls from various platforms (Google Ads, Facebook Ads, CRM, etc.).
Data Processing & ETL:
– Design and maintain ETL workflows to extract, transform, and load data from multiple sources.
– Clean, normalize, and validate datasets for accuracy and completeness.
– Manage and update data pipelines for marketing analytics.
Automation & Integration:
– Create workflows using Zapier, n8n, and Make (Integromat) to automate data movement and reporting.
– Build integrations between ad platforms, CRM (Zoho One), Google Sheets, and BI tools.
Zoho One Expertise:
– Manage data within Zoho CRM, Zoho Analytics, Zoho Creator, and other Zoho One applications.
– Create custom reports and dashboards in Zoho Analytics.
– Support marketing teams with CRM-based data segmentation for campaigns.
Data Analysis & Reporting:
– Analyze marketing and business data to provide actionable insights.
– Create dashboards and visualizations in Business Intelligence tools (e.g., Zoho Analytics, Power BI, Google Data Studio/Looker).
– Present findings to the marketing and operations teams with clear recommendations.

Required Skills & Experience:
Proven experience as a Data Analyst, Data Engineer, or similar role in a marketing agency or tech environment.
Strong skills in data scraping (BeautifulSoup, Scrapy, or equivalent tools).
Hands-on experience with ETL tools and processes.
Experience with third-party data extraction tools (e.g., Phantombuster, Octoparse, Apify).
Advanced workflow automation skills in Zapier, n8n, and Make.
Deep knowledge of the Zoho One ecosystem (Zoho CRM, Analytics, Creator, Flow).
Proficiency in SQL and basic scripting (Python preferred) for data manipulation.
Experience with BI tools (Zoho Analytics, Power BI, Looker Studio).
Strong problem-solving skills and ability to work independently.

Preferred Qualifications:
Prior experience in a digital marketing agency setting.
Familiarity with ad platform APIs (Google Ads, Meta Ads, LinkedIn Ads).
Understanding of marketing metrics (CPL, ROAS, CTR, Conversion Rate, etc.).

Tools & Platforms You’ll Work With:
Zoho One Suite (CRM, Analytics, Creator, Flow)
Zapier, n8n, Make (Integromat)
Google Sheets, Excel, SQL Databases
Data scraping tools (BeautifulSoup, Scrapy, Octoparse, Apify, Phantombuster)
BI Tools (Zoho Analytics, Power BI, Looker Studio)

Preferred Skills:
Basic knowledge of WordPress or content management systems.
Familiarity with Google Analytics and keyword research tools.
Creative storytelling and content structuring ability.
Knowledge of current digital marketing trends.
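For the ETL responsibilities above, here is a minimal extract-transform-load sketch with pandas and SQLite. The column names, derived metric, and database file are illustrative assumptions, not this employer's pipeline:

```python
# Minimal ETL sketch: extract rows, transform with pandas, load into SQLite.
import sqlite3
import pandas as pd

# Extract: in a real pipeline this would be an API pull or a CSV export.
df = pd.DataFrame({"campaign": ["a ", "B"], "spend": [100.0, 250.0],
                   "leads": [4, 5]})

# Transform: normalize text and derive a marketing metric (cost per lead).
df["campaign"] = df["campaign"].str.strip().str.upper()
df["cpl"] = df["spend"] / df["leads"]

# Load: write the cleaned frame to a local SQLite table.
with sqlite3.connect("marketing.db") as conn:
    df.to_sql("campaign_stats", conn, if_exists="replace", index=False)
```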
Posted 1 month ago
3.0 years
0 Lacs
Panchkula, India
On-site
📢 We're Hiring: Full Stack Developer & Python Web Scraper
📍 Location: Panchkula (Full-Time, On-Site)
📅 Experience Required: Minimum 3+ Years

We're on the lookout for a skilled Full Stack Developer who also brings strong expertise in Python-based web scraping. If you have experience or understanding of affiliate marketing, that's a major advantage.

👨‍💻 What you’ll be doing:
Building and managing scalable scraping systems for large-scale data extraction
Developing full-stack web applications (frontend + backend)
Integrating affiliate marketing tools, APIs, and tracking systems
Creating dashboards and tools to analyze scraped and affiliate performance data
Working closely with the team to turn data into actionable insights

✅ Ideal skill set:
Proficiency in Python scraping tools like Scrapy, BeautifulSoup, Selenium
Strong backend experience with Python frameworks (Django, Flask)
Frontend experience with React, Vue, or similar
Knowledge of affiliate platforms and tracking systems (like Impact, CJ, Awin, etc.)
Experience working with APIs, databases, and cloud deployment

✨ Bonus if you’ve built systems that power digital marketing or affiliate campaigns.

If this sounds like you (or someone you know), feel free to DM me or send your CV to admin@chevronmedia.in. Let’s build something impactful together.

#Hiring #FullStackDeveloper #WebScraping #PythonDeveloper #AffiliateMarketing #jobsinpanchkula #TechHiring
Posted 1 month ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
🚀 Actowiz Solutions is Hiring – Lead Data Scraping Engineer! 🚀
📍 Ahmedabad | WFO | Full-Time | Senior Role

Can you crack captchas, dodge blockers, and lead a team like a pro? Then you might just be our next tech superhero! 🦸‍♂️

What You’ll Do:
⚡ Lead a squad of 5+ scraping masters
⚡ Build scalable, lightning-fast scraping solutions
⚡ Outsmart captchas & bypass restrictions
⚡ Reverse-engineer, automate, and optimize data flows
⚡ Keep APIs, proxies, and pipelines running 24/7

What We Need:
✅ 4+ years in data scraping (Python, Scrapy, automation tools)
✅ 2+ years of leadership experience
✅ Mastery of captcha-solving, proxy rotation, reverse engineering
✅ SQL, MongoDB, Pandas expertise

Bonus Skills:
✨ Linux magic
✨ Appium, Fiddler, Burp Suite experience

📩 Apply Now:
📧 komal.actowiz@gmail.com
📞 8401366964

#ActowizSolutions #Hiring #DataScraping #Python #LeadEngineer #AhmedabadJobs #TechCareers #Automation
Posted 1 month ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Sigmoid enables business transformation using data and analytics, leveraging real-time insights to make accurate and fast business decisions, by building modern data architectures using cloud and open source. Some of the world’s largest data producers engage with Sigmoid to solve complex business problems. Sigmoid brings deep expertise in data engineering, predictive analytics, artificial intelligence, and DataOps. Sigmoid was recognized as one of the fastest-growing technology companies in North America in 2021 by the Financial Times, Inc. 5000, and Deloitte Technology Fast 500.

Offices: New York | Dallas | San Francisco | Lima | Bengaluru
This role is for our Bengaluru office.

Why Join Sigmoid?
• Sigmoid provides the opportunity to push the boundaries of what is possible by seamlessly combining technical expertise and creativity to tackle intrinsically complex business problems and convert them into straightforward data solutions.
• Despite being continuously challenged, you are not alone. You will be part of a fast-paced, diverse environment as a member of a high-performing team that works together to energize and inspire each other by challenging the status quo.
• Vibrant, inclusive culture of mutual respect and fun through both work and play.

Roles and Responsibilities:
• Convert broad vision and concepts into a structured data science roadmap, and guide a team to successfully execute on it.
• Handle end-to-end client AI & analytics programs in a fluid environment. Your role will be a combination of hands-on contribution, technical team management, and client interaction.
• Demonstrate the proven ability to discover solutions hidden in large datasets and to drive business results with data-based insights.
• Contribute to internal product development initiatives related to data science.
• Drive the excellent project management required to deliver complex projects, including effort/time estimation.
• Be proactive, with full ownership of the engagement. Build scalable client-engagement-level processes for faster turnaround and higher accuracy.
• Define technology strategy and roadmap for client accounts, and guide implementation of that strategy within projects.
• Manage team members to ensure that the project plan is being adhered to over the course of the project.
• Build a trusted-advisor relationship with the IT management at clients and with internal accounts leadership.

Mandated Skills:
• A B.Tech/M.Tech/MBA from a top-tier institute, preferably in a quantitative subject.
• 14+ years of hands-on experience in applied machine learning, AI, and analytics.
• Experience in scientific programming in scripting languages like Python and R, with SQL, NoSQL, and Spark, plus ML tools and cloud technology (AWS, Azure, GCP).
• Experience with Python libraries such as NumPy, Pandas, scikit-learn, TensorFlow, Scrapy, BERT, etc.
• Strong grasp of the depth and breadth of machine learning, deep learning, data mining, and statistical concepts, and experience in developing models and solutions in these areas.
• Expertise in client engagement, understanding complex problem statements, and offering solutions in domains such as Supply Chain, Manufacturing, CPG, and Marketing.

Desired Skills:
● Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems.
● Comfortable with large-scale data processing and distributed computing.
● Providing required inputs to sales and pre-sales activities.
● A self-starter who can work well with minimal guidance.
● Excellent written and verbal communication skills.
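As a small illustration of the scikit-learn skill set listed above, here is a minimal supervised-learning sketch on a bundled toy dataset; the model choice is an illustrative assumption:

```python
# Minimal scikit-learn sketch: train/test split, fit, and score a classifier.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```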
Posted 1 month ago
0.0 - 2.0 years
0 - 0 Lacs
Mohali, Punjab
On-site
Male applicants are preferred.

We are looking for an enthusiastic and proactive Python Developer to join our development team. In this role, you’ll help build and maintain backend services, work on API integrations, and collaborate closely with frontend developers to ensure seamless functionality between backend and ReactJS applications. You will have the opportunity to learn from experienced developers and gain hands-on experience in developing scalable solutions that drive real-world impact.

Experience Required: 2-3 Years
Mode of Work: On-Site Only (Mohali, Punjab)
Mode of Interview: Face to Face (On-Site)
Contact for Queries: +91-9872993778 (Mon–Fri, 11 AM – 6 PM)
Note: This number will be unavailable on weekends and public holidays.

Key Responsibilities:
Backend Development: Assist in the development of clean, efficient, and scalable Python applications to meet business needs.
API Integration: Support the creation, management, and optimization of RESTful APIs to connect backend and frontend components.
Collaboration: Work closely with frontend developers to integrate backend services into ReactJS applications, ensuring smooth data flow and functionality.
Testing and Debugging: Help with debugging, troubleshooting, and optimizing applications for performance and reliability.
Code Quality: Write readable, maintainable, and well-documented code while following best practices. Participate in code reviews to maintain high coding standards.
Learning and Development: Continuously enhance your skills by learning new technologies and methodologies, contributing ideas to improve development processes.

Required Skills and Experience:
Problem Solving: Strong analytical skills with an ability to identify and resolve issues effectively.
Teamwork: Ability to communicate clearly and collaborate well with cross-functional teams.
Programming Languages: Python (Core and Advanced), JavaScript, HTML, CSS
Frameworks: Django, Flask, FastAPI, LangChain
Libraries & Tools: Pandas, NumPy, Selenium, Scrapy, BeautifulSoup, Git, Postman, OpenAI API, REST APIs
Databases: MySQL, PostgreSQL, SQLite
Cloud & Deployment: Hands-on experience with AWS services (EC2, S3, etc.); building and managing cloud-based scalable applications
Web Development & API Integration: Backend development with Django, Flask, and FastAPI. Integration and consumption of RESTful APIs. Frontend collaboration and full-stack workflow understanding.
AI & Automation: Experience building AI-powered chatbots and assistants using OpenAI and LangChain. Familiarity with Retrieval-Augmented Generation (RAG) architecture. Automation of workflows and intelligent systems using Python.
Web Scraping & Data Handling: Real-time and large-scale web scraping using Scrapy, Selenium, and BeautifulSoup. Data extraction and transformation from public and structured sources.
Version Control & Testing: Proficient in Git for version control; API testing using Postman.

Preferred Qualifications:
Education: A degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
Cloud Experience: Basic knowledge of cloud platforms like AWS, Google Cloud, or Azure is a plus.
Agile Experience: Familiarity with Agile/Scrum methodologies is a plus.

Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹40,000.00 per month
Experience: Python: 2 years (Required)
Work Location: In person
Posted 1 month ago
3.0 - 6.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Job Title: Senior Data Engineer
Location: Mumbai

Position Overview:
We are looking for a talented and experienced Senior Data Engineer to join our team. The ideal candidate will have strong programming skills in Python, experience building scalable data pipelines, and a passion for transforming raw data into actionable insights. You will work on complex data challenges, integrate data from diverse sources, and collaborate with internal stakeholders to deliver high-impact solutions.

Educational Qualifications:
Bachelor’s degree in Engineering, Computer Science, Information Technology, or a related field. A Master’s degree in the relevant domain is an added advantage. Degree must be from a recognized university.

Experience Required:
3 to 6 years of relevant experience in data engineering or related roles.

Key Skills & Competencies:
Strong programming skills in Python, with sound knowledge of Object-Oriented Programming (OOP).
Ability to write reusable, scalable, testable, and efficient code.
Hands-on experience or working knowledge of web frameworks like Django or Flask (preferred).
Experience with web scraping using tools like BeautifulSoup, Scrapy, or Selenium.
Proficiency in working with both SQL and NoSQL databases.
Familiarity with big data technologies like Hadoop, Spark, Talend, etc.
Experience in building ETL pipelines and integrating data from various sources (web services, file systems, APIs).
Understanding of HTML/XML parsing.
Ability to handle and process both structured and unstructured data (text, images, etc.).
Good to have: solid grasp of Data Structures and Algorithms.
Comfortable working with Microsoft Excel for data transformation and automation.

Key Responsibilities:
Design, build, and maintain analytical data systems and data pipelines.
Collect, clean, transform, and analyse large datasets from multiple sources.
Convert data from MS Excel into Python DataFrames or similar structures for automation and analysis (see the sketch after this listing).
Develop ETL scripts to enable smooth data flow across systems.
Collaborate with business stakeholders and engineering teams to define and implement data solutions.
Translate business requirements into technical specifications.
Follow best coding practices and ensure adherence to coding standards.
Ensure all non-functional requirements (scalability, performance, security) are met.
Participate in interface design, reporting framework development, and enhancements.
Maintain accurate and up-to-date technical documentation.
Provide regular project/task updates to the Project Manager.
Offer constructive feedback and mentorship where applicable.
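A minimal sketch of the Excel-to-DataFrame step called out above. The file name and column handling are assumptions, and pandas needs the openpyxl engine installed to read .xlsx files:

```python
# Minimal Excel-to-DataFrame sketch: read, tidy, and hand off downstream.
import pandas as pd

df = pd.read_excel("input.xlsx", sheet_name=0)      # Excel -> DataFrame
df = df.dropna(how="all").rename(columns=str.lower) # drop empty rows, normalize headers
df.to_csv("clean_output.csv", index=False)          # export for the next pipeline stage
```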
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Web Scraping Engineer, you will join our dynamic team and play a crucial role in driving our data-driven strategies. Your primary responsibility will be to develop and maintain innovative solutions to automate data extraction, parsing, and structuring from various online sources. By utilizing your expertise in web scraping, you will empower our business intelligence, market research, and decision-making processes.

Key Responsibilities:
- Design, implement, and maintain web scraping solutions to collect structured data from publicly available online sources and APIs.
- Parse, clean, and transform extracted data to ensure accuracy and usability for business needs.
- Store and organize collected data in databases or spreadsheets for easy access and analysis.
- Monitor and optimize scraping processes for efficiency, reliability, and compliance with relevant laws and website policies.
- Troubleshoot issues related to dynamic content, anti-bot measures, and changes in website structure.
- Collaborate with data analysts, scientists, and other stakeholders to understand data requirements and deliver actionable insights.
- Document processes, tools, and workflows for ongoing improvements and knowledge sharing.

Requirements:
- Proven experience in web scraping, data extraction, or web automation projects.
- Proficiency in Python or similar programming languages, and familiarity with libraries such as BeautifulSoup, Scrapy, or Selenium.
- Strong understanding of HTML, CSS, JavaScript, and web protocols.
- Experience with data cleaning, transformation, and storage (e.g., CSV, JSON, SQL/NoSQL databases).
- Knowledge of legal and ethical considerations in web scraping, with a commitment to compliance with website terms of service and data privacy regulations.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently and manage multiple projects simultaneously.

Preferred Qualifications:
- Experience with cloud platforms (AWS, GCP, Azure) for scalable data solutions.
- Familiarity with workflow automation and integration with communication tools (e.g., email, Slack, APIs).
- Background in market research, business intelligence, or related fields.

Skills: data extraction, data cleaning, BeautifulSoup, business intelligence, web automation, JavaScript, web scraping, data privacy regulations, web protocols, Selenium, Scrapy, SQL, data transformation, NoSQL, CSS, market research, automation, Python, HTML.
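The parse-clean-store loop described above can be illustrated with a minimal sketch. The HTML snippet, field names, and output files are illustrative assumptions:

```python
# Minimal parse-clean-store sketch: HTML in, tidy JSON and CSV out.
import csv
import json
from bs4 import BeautifulSoup

html = "<ul><li class='p'>Widget - $9.99</li><li class='p'>Gadget - $4.50</li></ul>"
soup = BeautifulSoup(html, "html.parser")

rows = []
for li in soup.select("li.p"):
    name, _, price = li.get_text(strip=True).partition("-")
    rows.append({"name": name.strip(), "price": float(price.replace("$", ""))})

with open("products.json", "w") as f:
    json.dump(rows, f, indent=2)
with open("products.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(rows)
```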
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Senior Python Programmer to join its team of experts.

Skill: Senior Python Programmer
Exp: 5+ Yrs
NP: Immediate to 15 Days
Location: Chennai/Madurai
Interested candidates can send their resume to annie@egrovesys.com

Required Skills:
- 5+ years of strong experience in Python and 2 years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and Object-Oriented Programming.
- Experience in PGSQL/MySQL and MongoDB.
- Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, with Git and Agile methodology.
- Good to have knowledge of any one of the JavaScript frameworks: jQuery, Angular, ReactJS.
- Good to have experience in implementing charts and graphs using various libraries.
- Good to have experience in multi-threading and REST API management.

About Company:
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, start-ups, and government agencies.

At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
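As an example of the design-pattern knowledge this posting asks about, here is a minimal Factory Method sketch in plain Python; the class and format names are illustrative assumptions:

```python
# Minimal Factory Method sketch: callers ask for a format, not a class.
import json

class Exporter:
    def export(self, rows: list[dict]) -> str:
        raise NotImplementedError

class JsonExporter(Exporter):
    def export(self, rows):
        return json.dumps(rows)

class CsvExporter(Exporter):
    def export(self, rows):
        header = ",".join(rows[0]) if rows else ""
        lines = [",".join(str(v) for v in r.values()) for r in rows]
        return "\n".join([header, *lines])

def make_exporter(fmt: str) -> Exporter:
    # The factory hides concrete classes behind a simple lookup.
    return {"json": JsonExporter, "csv": CsvExporter}[fmt]()

print(make_exporter("json").export([{"id": 1, "name": "demo"}]))
```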
Posted 1 month ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Company: Actowiz Solutions
Location: Ahmedabad
Job Type: Full-time
Working Days: 5 days a week

About Us:
Actowiz Solutions is a leading provider of data extraction, web scraping, and automation solutions. We empower businesses with actionable insights by delivering clean, structured, and scalable data through cutting-edge technology. Join our fast-growing team and lead projects that shape the future of data intelligence.

Role Overview:
We are looking for a highly skilled Python Developer with expertise in web scraping, automation tools, and related frameworks.

Key Responsibilities:
• Design, develop, and maintain scalable web scraping scripts and frameworks.
• Work with tools and libraries such as Scrapy, BeautifulSoup, Selenium, Playwright, Requests, etc.
• Implement robust error handling, data parsing, and storage mechanisms (JSON, CSV, databases, etc.).
• Optimize scraping performance and ensure compliance with legal and ethical scraping practices.
• Collaborate with product managers, QA, and DevOps teams to ensure timely delivery.
• Research new tools and techniques to improve scraping efficiency and scalability.

Requirements:
• 2+ years of experience in Python development with strong expertise in web scraping.
• Proficiency in scraping frameworks like Scrapy, Playwright, or Selenium.
• Experience with REST APIs, asynchronous programming, and multithreading.
• Familiarity with databases (SQL/NoSQL) and cloud-based data pipelines.
• Ability to manage deadlines and deliverables in an Agile environment.

Preferred Qualifications:
• Prior experience leading a team or managing technical projects.
• Knowledge of DevOps tools (Docker, CI/CD) is a plus.

Benefits:
• Competitive salary.
• 5-day work week (Monday–Friday)
• Flexible work environment
• Opportunities for growth and skill development
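For the Scrapy framework named above, here is a minimal spider sketch against a public scraping sandbox; the start URL and CSS selectors are specific to that demo site:

```python
# Minimal Scrapy spider sketch: extract fields and follow pagination.
# Run with: scrapy runspider quotes_spider.py -o quotes.json
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        for q in response.css("div.quote"):
            yield {
                "text": q.css("span.text::text").get(),
                "author": q.css("small.author::text").get(),
            }
        # Follow the "next" link until the site runs out of pages.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```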
Posted 1 month ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Location: Ahmedabad/WFO
Experience Level: Senior (4+ years)
Employment Type: Full-time

Job Summary:
We are seeking a highly skilled and experienced Lead Data Scraping Engineer to join our team. The ideal candidate will have a minimum of 4 years of hands-on experience in IT scraping, with at least 2 years leading a team of 5+ developers. This role requires deep technical knowledge in advanced scraping techniques, reverse engineering, automation, and leadership skills to drive the team towards success.

Key Responsibilities:
• Design and develop scalable data scraping solutions using tools like Scrapy and Python libraries.
• Lead and mentor a team of 5+ developers, managing project timelines and deliverables.
• Implement advanced blocking and captcha-solving techniques to bypass scraping restrictions.
• Conduct source code reverse engineering and automate web and app interactions.
• Manage proxies, IP rotation, and SSL unpinning to ensure effective scraping.
• Maintain and improve API integrations and data pipelines.
• Ensure code quality through effective version control, error handling, and documentation.
• Collaborate with cross-functional teams for project planning and execution.
• Monitor performance and provide solutions under high-pressure environments.

Required Skills and Experience:
• Data Scraping: Minimum 4 years in the IT scraping industry
• Leadership: Minimum 2 years leading a team of 5+ developers
• Scraping Tools: Scrapy, threading, requests, web automation
• Technical Proficiency:
  o Advanced Python
  o Captcha solving and blocking handling
  o Source reverse engineering
  o Proxy management & IP rotation
  o App automation, SSL unpinning, Frida
  o API management, version control systems
  o Error handling, SQL, MongoDB, Pandas

Leadership Skills:
• Basic project management
• Moderate documentation
• Team handling
• Pressure management
• Flexibility and adaptability
• High accountability

Preferred (Good to Have):
• Experience with Linux
• Knowledge of Appium, Fiddler, Burp Suite
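The "threading, requests" item above is commonly implemented with a thread pool; a minimal sketch follows. The URL list is an assumption, and a real job would add retries and rate limits:

```python
# Minimal threaded-fetching sketch: fan out HTTP requests over a pool.
from concurrent.futures import ThreadPoolExecutor
import requests

URLS = ["https://httpbin.org/get"] * 5  # placeholder targets

def fetch(url: str) -> int:
    return requests.get(url, timeout=10).status_code

with ThreadPoolExecutor(max_workers=5) as pool:
    for status in pool.map(fetch, URLS):
        print(status)
```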
Posted 1 month ago
3.0 years
0 Lacs
India
Remote
Location: [Remote / India]
Job Type: [Full-time]
Experience: 3+ years in web crawling/scraping, backend systems, and data extraction

About Client:
Our client is a modern, meaning-based web-search API designed specifically for AI applications, such as retrieval-augmented generation (RAG). Unlike traditional keyword-based engines, they use embedding-based semantic search, allowing developers to fetch content that's contextually relevant and up-to-date.

About the Role:
We are looking for a skilled Web Crawler Engineer to design, develop, and maintain scalable web crawling and scraping systems. The ideal candidate should be well-versed in handling large-scale data extraction, parsing unstructured web data, dealing with anti-bot mechanisms, and managing crawling infrastructure. We are looking for someone who has done web crawling before and is ready to work a lot: you will crawl 3M URLs per hour to add to the in-house index. You should be good at, or keen on, high-performance engineering as the company scales its vector DB, with unlimited compute for the biggest web-crawling push of your career.

Key Responsibilities:
Develop robust, scalable, and efficient web crawlers to extract structured/unstructured data from dynamic websites.
Design and implement data pipelines to process, clean, and store scraped data in databases or data lakes.
Monitor and maintain crawling systems to ensure reliability, data accuracy, and performance.
Handle anti-bot measures (e.g., CAPTCHAs, IP blocks, dynamic content loading) using techniques like headless browsing, proxies, and rotating user agents.
Ensure compliance with site-specific terms of service and data privacy policies.
Collaborate with data scientists, backend engineers, and product managers to support business goals through reliable data feeds.

Required Skills:
Strong experience with web scraping tools/frameworks (e.g., Scrapy, Puppeteer, Selenium, Playwright).
Proficiency in Python is essential; familiarity with JavaScript or Go is a plus.
Hands-on experience in parsing HTML/XML/JSON using BeautifulSoup, lxml, or similar libraries.
Minimum 3 years of experience with headless browsers and automation tools (e.g., Puppeteer or Playwright).
Good understanding of networking, HTTP protocols, headers, cookies, and sessions.
Familiarity with databases (SQL or NoSQL – e.g., PostgreSQL, MongoDB, Elasticsearch).
Experience using proxies, VPNs, and user-agent rotation to bypass crawling limitations.
Familiarity with task queues and schedulers (e.g., Celery, Airflow, Cron).
Understanding of cloud services (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes) is a plus.

Preferred Qualifications:
Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
Experience handling large-scale crawls (millions of pages per day).
Knowledge of ethical scraping practices and legal considerations (e.g., robots.txt, GDPR).
Exposure to data pipelines and distributed systems (e.g., Kafka, Spark).

Tools & Technologies (Nice to Have):
Scrapy, Selenium, Puppeteer, Playwright
Python, JavaScript
BeautifulSoup, lxml
Redis, Kafka, PostgreSQL, MongoDB
AWS/GCP, Docker, Git
Airflow, Jenkins

What We Offer:
Competitive salary and performance-based incentives
Opportunity to work on impactful data engineering problems
Flexible work hours and remote-first culture
Learning and development allowance
Collaborative and inclusive team environment
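Throughput on the scale described above usually comes from asynchronous I/O rather than threads alone. A minimal asyncio/aiohttp sketch follows; the URLs and concurrency cap are assumptions, and a production crawler would add politeness delays, robots.txt checks, retries, and deduplication:

```python
# Minimal high-concurrency crawl sketch with asyncio and aiohttp.
import asyncio
import aiohttp

URLS = [f"https://httpbin.org/get?i={i}" for i in range(20)]
SEM = asyncio.Semaphore(10)  # cap in-flight requests

async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    async with SEM, session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
        await resp.read()
        return resp.status

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(*(fetch(session, u) for u in URLS))
        print(statuses)

asyncio.run(main())
```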
Posted 1 month ago
2.0 years
0 Lacs
India
On-site
About Us:
We are an innovative and data-driven organization committed to harnessing the power of data to drive strategic decisions, efficiency, and automation. Our mission is to transform complex data into actionable insights, fueling growth and innovation.

Position Overview:
We are seeking a skilled Web Scraper & Data Automation Specialist to join our dynamic data operations team. The successful candidate will play a pivotal role in extracting, structuring, and automating data retrieval from diverse sources, ensuring its accuracy, relevance, and usability for internal analytics and strategic projects.

Responsibilities:
Design, develop, and maintain web scraping scripts and automation tools for data extraction from various websites, APIs, and online platforms.
Implement advanced data scraping techniques to handle large datasets, dynamic content, anti-scraping mechanisms, and complex web structures.
Clean, transform, structure, and validate extracted data, ensuring its accuracy, consistency, and readiness for analysis.
Collaborate with data analysts, engineers, and product managers to identify data needs and define scraping strategies.
Continuously monitor, maintain, and enhance existing scraping tools to adapt to changes in source websites or APIs.
Leverage open-source libraries and tools such as Scrapy, Beautiful Soup, Selenium, Requests, Puppeteer, and other relevant technologies.
Automate end-to-end data collection processes, scheduling scrapers to run efficiently and reliably.
Conduct regular audits of data quality, troubleshoot and resolve issues in scraping operations, and document solutions.
Stay up-to-date with emerging technologies, trends, and best practices in web scraping, data extraction, and data automation.

Requirements:
Bachelor’s degree or higher in Computer Science, Information Technology, Data Science, or a related field, or equivalent practical experience.
2+ years of proven experience in web scraping, data extraction, and data automation roles.
Demonstrated proficiency in programming languages such as Python, JavaScript, or similar scripting languages.
Solid experience with web scraping frameworks and tools like Scrapy, Selenium, Puppeteer, Beautiful Soup, or similar.
Experience handling and processing data formats such as JSON, XML, CSV, HTML, and APIs.
Ability to structure, cleanse, and standardize raw data into structured formats for analysis.
Knowledge of databases and data storage solutions such as SQL, NoSQL, MongoDB, PostgreSQL, or similar.
Familiarity with cloud-based data processing environments (AWS, Azure, Google Cloud) is highly desirable.
Strong analytical and problem-solving skills with an eye for detail and accuracy.
Excellent organizational, time-management, and communication skills.
Proven ability to work independently and collaboratively in a team environment.

Preferred Skills:
Experience with Docker or Kubernetes for deploying scalable scraping solutions.
Familiarity with version control systems like Git.
Exposure to big data technologies such as Apache Spark, Hadoop, or similar frameworks.
Understanding of data privacy, compliance, and ethical scraping practices.

Why Join Us:
Be part of a forward-thinking, data-centric organization.
Work with cutting-edge tools and technologies.
Collaborative, diverse, and inclusive team culture.
Opportunities for continuous learning, growth, and professional development.

If you're passionate about data, automation, and pushing the boundaries of web scraping and data engineering, we'd love to connect with you!
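Scheduling scrapers to run reliably, as mentioned above, can be sketched with the standard library alone; the interval and job body are assumptions, and production setups typically use cron, Airflow, or Celery beat instead of a bare loop:

```python
# Minimal recurring-job sketch: run a scrape on a fixed interval.
import time
from datetime import datetime

def run_scraper() -> None:
    # Placeholder for the actual scrape-and-store logic.
    print(f"[{datetime.now():%H:%M:%S}] scraper run placeholder")

INTERVAL_SECONDS = 60 * 60  # hourly (an assumption)

while True:
    run_scraper()
    time.sleep(INTERVAL_SECONDS)
```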
How to Apply: Please submit your resume and portfolio showcasing relevant projects and achievements related to web scraping and data automation.
Posted 1 month ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Python Developer
Location: Gurugram, India (Hybrid)
Minimum 4 years of relevant experience

Job Description:
Strong understanding of Python and web scraping techniques, with experience in frameworks such as Scrapy or Selenium.
Knowledge of various Python libraries, APIs, and toolkits, including hands-on experience with Pandas, databases, and SQL Server.
Proficient in data extraction methods (e.g., PDF data extraction, Excel automation) and code versioning tools (e.g., Git).
Skilled in writing scalable code, testing, and debugging applications to ensure quality and functionality.
Experience in developing back-end components and integrating data storage solutions while optimizing applications for maximum speed and scalability.
Collaborate with cross-functional teams to define and implement solutions, contributing to continuous improvement through code reviews.

Required:
Experience: 5 years in Python development.
We are looking for a talented programmer to create, debug, and enhance secure and functional code. The candidate should have experience working with Python and be able to design and build superior and innovative tools by writing clean and flawless code. Additionally, the candidate should contribute to the maintenance of existing tools for business continuity purposes.
Analytical Thinking: Ability to understand, create, manipulate, and debug code.
Soft Skills: Strong problem-solving abilities, adaptability, and a proactive approach to learning new technologies.
Certifications: Relevant certifications (e.g., AWS Certified Developer, Microsoft Certified: Azure Developer Associate) are a plus.

About Ascendion:
Ascendion is transforming the future of technology with AI-driven software engineering. Our global team accelerates innovation and delivers future-ready solutions for some of the world’s most important industry leaders. Our applied AI, software engineering, cloud, data, experience design, and talent transformation capabilities accelerate innovation for Global 2000 clients. Join us to build transformative experiences, pioneer cutting-edge solutions, and thrive in a vibrant, inclusive culture - powered by AI and driven by bold ideas.
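For the PDF data-extraction method mentioned above, here is a minimal sketch using the third-party pdfplumber package; the file path is an assumption:

```python
# Minimal PDF text-extraction sketch with pdfplumber.
# Assumes: pip install pdfplumber, and a local sample.pdf to read.
import pdfplumber

with pdfplumber.open("sample.pdf") as pdf:
    for i, page in enumerate(pdf.pages, start=1):
        text = page.extract_text() or ""
        print(f"--- page {i} ---")
        print(text[:200])  # preview the first 200 characters
```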
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About The Role:
We are seeking a highly skilled Web Scraping & Python API Developer to build and maintain scalable data extraction systems from various websites and APIs. The ideal candidate has hands-on experience with web scraping frameworks, RESTful API development, and data integration techniques.

Responsibilities:
Design and develop robust, scalable web scraping scripts using Python (e.g., Scrapy, BeautifulSoup, Selenium).
Build and maintain RESTful APIs to serve scraped data to internal systems or clients.
Handle anti-bot mechanisms like CAPTCHAs, JavaScript rendering, and IP rotation.
Optimize scraping processes for speed, reliability, and data integrity.
Parse and normalize structured and unstructured data (HTML, JSON, XML).
Monitor and maintain scraping pipelines; handle failures and site structure changes.
Implement logging, error handling, and reporting mechanisms.
Collaborate with product managers and data analysts to define data requirements.
Ensure compliance with website terms of service and data use regulations.

Requirements:
3+ years of experience with Python, especially in data extraction and web automation.
Strong knowledge of web scraping libraries (Scrapy, BeautifulSoup, Requests, Selenium).
Experience with REST API development (FastAPI, Flask, or Django REST Framework).
Proficient with data handling libraries (Pandas, JSON, Regex).
Experience working with proxies, headless browsers, and CAPTCHA-solving tools.
Familiarity with containerization (Docker) and deployment on cloud platforms (AWS, GCP, Azure).
Strong understanding of HTML, CSS, JavaScript (from a scraping perspective).
Experience with version control (Git) and agile development methodologies.

Nice To Have:
Experience with GraphQL scraping.
Familiarity with CI/CD pipelines and DevOps tools.
Knowledge of data storage solutions (PostgreSQL, MongoDB, Elasticsearch).
Prior experience with large-scale web crawling infrastructure.

Benefits:
Competitive salary and performance bonuses.
Flexible work hours and remote work option.
Opportunity to work on high-impact, data-driven products.
Learning budget for conferences, books, and courses.
(ref:hirist.tech)
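JavaScript rendering, named above, is typically handled with a headless browser; here is a minimal Selenium sketch. It assumes Selenium 4+ with Chrome available locally, and uses a public JS-rendered demo page:

```python
# Minimal headless-browser sketch for JavaScript-rendered pages.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opts = Options()
opts.add_argument("--headless=new")  # render without opening a window

driver = webdriver.Chrome(options=opts)
try:
    driver.get("https://quotes.toscrape.com/js/")  # JS-rendered demo site
    print(driver.title)
    print(len(driver.page_source), "bytes of rendered HTML")
finally:
    driver.quit()
```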
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As a Python Web Scraping & AI Engineer at ZANG - an AI-powered e-commerce search engine based in Mumbai - you will play a crucial role in revolutionizing the e-commerce landscape by contributing to the development of cutting-edge web scraping infrastructure and AI algorithms. ZANG is a dynamic startup focused on innovation, and we are seeking a skilled individual like you to enhance our search capabilities.

Your primary responsibilities will include designing, building, and maintaining scalable web scraping solutions using Python to extract structured data from various e-commerce websites. You will be tasked with handling challenges such as CAPTCHAs, rotating proxies, and anti-scraping techniques to ensure data integrity. Additionally, you will work on integrating machine learning models and algorithms into the search engine for recommendation, ranking, and personalization purposes.

Collaboration with front-end and back-end developers will be essential to integrate scraped data and AI models into the user-facing product. You will be required to write clean, scalable, and well-documented code following best practices, and to provide regular updates on project milestones and deliverables. Furthermore, you will be responsible for designing and managing data storage systems and optimizing data pipelines for fast querying and retrieval by the search engine.

To be successful in this role, you should have proficiency in Python and related libraries such as BeautifulSoup, Scrapy, Selenium, or similar tools. Experience in web scraping, familiarity with AI/ML libraries like TensorFlow and PyTorch, and knowledge of NLP techniques will be advantageous. A strong understanding of REST APIs and version control systems, plus experience with vector/NoSQL databases, are also required qualifications.

If you are passionate about working with cutting-edge AI and data scraping technologies, this position offers a competitive salary, stock options, and the opportunity to work in a fast-paced startup environment with significant growth potential. To apply for this exciting opportunity, please email your resume to amit@inventiway.in and we will schedule a time to discuss the position in detail.
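The vector-database requirement above reflects embedding-based search; the core ranking step can be sketched with cosine similarity over toy vectors. The 3-dimensional embeddings below are made-up stand-ins for real model outputs:

```python
# Minimal semantic-search sketch: rank documents by cosine similarity.
import numpy as np

docs = {"red shoes": [0.9, 0.1, 0.0],
        "blue jacket": [0.1, 0.8, 0.1],
        "running sneakers": [0.8, 0.2, 0.1]}  # toy embeddings

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.85, 0.15, 0.05])  # pretend embedding of "sport shoes"
ranked = sorted(docs, key=lambda d: cosine(query, np.array(docs[d])), reverse=True)
print(ranked)  # most semantically similar product first
```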
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
Arrk Group India is a technology company with over 23 years of industry experience, specializing in developing scalable platforms to meet the needs of customers and enabling organizations to implement disruptive technologies for business transformation. The focus is on helping both public and private sector customers globally transition to digital environments.

As a Senior AI Engineer at Arrk Group India, you will be responsible for AI engineering system architecture and development. This includes understanding generative AI, large language models, and other foundation models for various business applications. You should possess demonstrable knowledge and practical experience in using models programmatically via APIs, and have a strong understanding of AI engineering solution architectures such as RAG.

Your practical AI skills should include evaluating models for specific use cases and knowledge of databases like SQL and Postgres, plus vector databases like pgvector and ChromaDB. You will explore, analyze, and visualize data at scale to identify differences in data distribution that could impact model performance. Additionally, you will need to verify data quality, clean data, establish ground truth, synthesize data, and organize data for fine-tuning.

Proficiency in deploying large language models to production, training models, tuning hyperparameters, and selecting hardware for running ML models with the required latency is essential. You should also be able to analyze machine learning algorithms for problem-solving and rank them based on success probability. Knowledge of OpenCV and familiarity with deep learning frameworks like TensorFlow, PyTorch, or similar is desired, along with experience in AWS Cloud and SageMaker.

In addition to technical skills, you should be comfortable with rapid prototyping, possess strong problem-solving and analytical skills, and have familiarity with version control systems like Git. Proficiency in Python and basic ML libraries, understanding of NLP frameworks such as spaCy, and familiarity with REST APIs are necessary. Knowledge of crawling frameworks like Scrapy, familiarity with Linux OS, and excellent communication skills are also required.

If you meet the specified criteria and are interested in joining a dynamic team working on cutting-edge AI solutions, please share your updated CV with kajal.uklekar@arrkgroup.com. We look forward to connecting with talented AI professionals who can contribute to our innovative projects.

Thank you,
Kajal Uklekar
Senior Talent Manager
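For the spaCy NLP framework mentioned above, here is a minimal named-entity-recognition sketch. It assumes the small English model has been installed with `python -m spacy download en_core_web_sm`:

```python
# Minimal spaCy sketch: run NER over a sentence and print entities.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Arrk Group India has offices in Navi Mumbai and works with AWS.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```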
Posted 1 month ago
3.0 years
0 Lacs
Gurgaon
On-site
Job Title: Data Analyst – D2C eCommerce
Location: Gurgaon, Haryana
Experience: 3+ Years (mandatory experience in the D2C eCommerce industry)

About the Role:
We are looking for an experienced Data Analyst with a strong background in the Direct-to-Consumer (D2C) eCommerce industry. The ideal candidate will have proven expertise in web scraping, data processing, and visualization tools to derive actionable insights. This role demands a proactive thinker who understands eCommerce metrics, consumer behavior, and performance tracking.

Key Responsibilities:
Collect, clean, and process large datasets from various eCommerce platforms, CRMs, and online sources.
Perform web scraping to gather competitor pricing, product listings, and customer review data.
Create interactive dashboards and visual reports using Power BI, Tableau, or similar tools.
Analyze marketing, sales, and website traffic data to identify trends and business opportunities.
Collaborate with marketing, product, and tech teams to support data-driven decision-making.
Monitor and track key performance indicators (KPIs) for D2C brands (CAC, LTV, conversion rates, ROAS, etc.); a worked example of these metrics follows this listing.
Generate weekly/monthly performance reports for senior management.
Build predictive models and segmentation analysis to support customer retention and growth.

Required Skills & Qualifications:
Bachelor's or Master’s degree in Computer Science, Statistics, Mathematics, or a related field.
3+ years of experience as a Data Analyst in a D2C eCommerce company (mandatory).
Strong experience in web scraping using Python (BeautifulSoup, Scrapy, Selenium, etc.).
Proficiency in SQL, Excel, and data processing libraries (Pandas, NumPy).
Hands-on experience with data visualization tools like Tableau, Power BI, or Google Data Studio.
Knowledge of eCommerce tools/platforms such as Shopify, WooCommerce, Amazon, etc.
Strong analytical and problem-solving skills with attention to detail.
Excellent communication skills and ability to present complex data in a simplified manner.

Email: etalenthire@gmail.com
Satish: 88O2749743

Job Type: Full-time
Pay: ₹10,013.51 - ₹75,678.78 per month
Schedule: Day shift
Ability to commute/relocate: Gurgaon, Haryana: Reliably commute or planning to relocate before starting work (Preferred)

Application Question(s):
Do you have experience in the D2C eCommerce industry?
Company name?
Current salary?
Expected salary?
Notice period?
Current location?
Would you be comfortable with the job location (Gurgaon)?

Experience:
Data analytics: 3 years (Preferred)
Web scraping: 3 years (Preferred)

Work Location: In person
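As promised above, here is the worked example of the D2C KPI arithmetic this role tracks. The input numbers are made-up illustrative values:

```python
# Minimal KPI sketch: ROAS, CAC, and conversion rate from raw inputs.
ad_spend = 50_000.0       # campaign spend (illustrative)
revenue = 175_000.0       # attributed revenue
new_customers = 400
sessions = 20_000
orders = 800

roas = revenue / ad_spend            # return on ad spend
cac = ad_spend / new_customers       # customer acquisition cost
conversion_rate = orders / sessions  # orders per session

print(f"ROAS: {roas:.2f}x, CAC: {cac:.0f}, CVR: {conversion_rate:.1%}")
```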
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
Spherex is seeking an Artificial Intelligence (AI) / Machine Learning (ML) Engineer to contribute to the development, enhancement, and expansion of our product platform catering to the Media and Entertainment sector. As the AI/ML Engineer, your duties will involve the creation of machine learning models and system retraining. The position is based in Navi Mumbai, India.

The ideal candidate should hold a degree in computer science or software development. Proficiency in .Net, Azure, project management, and team and client management is essential. Additionally, familiarity with Python, TensorFlow, PyTorch, MySQL, Artificial Intelligence, and Machine Learning is desired.

Key requirements for this role include expertise in Python with OOPS concepts and a solid foundation in Natural Language Understanding, Machine Learning, and Artificial Intelligence. Knowledge of ML/DL libraries such as NumPy, Pandas, TensorFlow, PyTorch, Keras, scikit-learn, Jupyter, and spaCy/NLTK is crucial. Hands-on experience with MySQL and NoSQL databases, along with proficiency in scraping tools like BeautifulSoup and Scrapy, is also required.

The successful candidate should have experience in web development frameworks like Django and Flask, as well as working with RESTful APIs using Django. Familiarity with end-to-end data science pipelines, strong unit testing and debugging abilities, and applied statistical skills are necessary. Proficiency in Git, Linux OS, and ML architectures and approaches including object detection, semantic segmentation, classification, regression, RNNs, and data fusion is expected. Knowledge of OpenCV, OCR, YOLO, Docker, Kubernetes, and ETL (Pentaho) is considered a plus.

Candidates must possess a minimum of 4+ years of experience in advanced AI/ML projects within commercial environments. Experience in utilizing AI/ML for video and audio content analysis is advantageous. Education-wise, a college degree in computer science or software development is required, along with excellent documentation skills and effective communication in both technical and non-technical contexts.
Posted 1 month ago