3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a Product & Manufacturer Data Validation Specialist, your role will involve validating manufacturer information and ensuring accurate product taxonomy classification for industrial parts. You will work with AI-generated data, Excel workbooks, and online resources to enhance the integrity of product and supplier databases.

Key Responsibilities:

Manufacturer Data Validation:
- Review manufacturer names extracted via AI-assisted web search and assess accuracy.
- Conduct manual research to confirm manufacturer identities and correct discrepancies.
- Update Excel workbooks with verified and standardized manufacturer data.
- Flag ambiguous or unverifiable entries for further investigation.
- Provide feedback to improve AI data extraction and enrichment workflows.

Product Taxonomy Validation:
- Validate and refine product taxonomy for industrial parts, ensuring alignment with internal classification standards.
- Review AI-generated or vendor-supplied categorizations and correct misclassifications.
- Research product attributes to determine appropriate category placement.
- Document taxonomy rules, exceptions, and updates for reference and training.
- Support taxonomy mapping for new product onboarding and catalog expansion.

Qualifications Required:
- Strong proficiency in Microsoft Excel, including formulas, filters, and data management tools.
- Excellent research and analytical skills with high attention to detail.
- Experience working with AI-generated data or web scraping tools is a plus.
- Familiarity with manufacturer databases, industrial parts, or technical product data.
- Understanding of product taxonomy, classification systems, and metadata.
- Ability to work independently and manage multiple tasks effectively.
- Strong communication skills for cross-functional collaboration.
Posted 2 days ago
2.0 - 5.0 years
6 - 16 Lacs
hyderabad, chennai, bengaluru
Work from Office
Skills: Python programming, data science methodologies, AI systems or chatbot development; deep learning (transformers, GANs, VAEs), LLM processing, and prompt engineering; TensorFlow, PyTorch, spaCy. Send CV to sairamglobal.hr@gmail.com. Required candidate profile: Education: 15 years of full-time regular study required. Notice period: 0 to 45 days. Location: Bangalore, Hyderabad, Chennai. Good communication skills required.
Posted 2 days ago
3.0 - 7.0 years
4 - 7 Lacs
thane, navi mumbai, mumbai (all areas)
Work from Office
Key Responsibilities:
- Develop and maintain automated web scraping scripts using Python libraries such as Beautiful Soup, Scrapy, and Selenium.
- Optimize scraping pipelines for performance, scalability, and resource efficiency.
- Handle dynamic websites and CAPTCHA solving, and implement IP rotation techniques for uninterrupted scraping.
- Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
- Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
- Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.

Technical Skills:
- Proficiency in Python and libraries like Beautiful Soup, Scrapy, and Selenium.
- Experience with regular expressions (regex) for data parsing.
- Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation.
- Familiarity with databases (SQL and NoSQL) for storing scraped data.
- Hands-on experience with data manipulation libraries such as pandas and NumPy.
- Experience working with APIs and managing third-party integrations.
- Familiarity with version control systems like Git.

Bonus Skills:
- Knowledge of containerization tools like Docker.
- Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ).
- Basic understanding of data visualization tools.

Non-Technical Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and documentation skills.
- Ability to work independently and collaboratively in a team environment.
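Two of the techniques this posting names, user-agent rotation and request throttling, can be sketched in a few lines of standard-library Python. The user-agent strings and proxy URLs below are placeholders for illustration, not real infrastructure, and a `fetch` callable is assumed rather than tied to any particular HTTP library:

```python
import itertools
import time

# Placeholder pools; real deployments load larger pools from configuration.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]
PROXIES = ["http://proxy-a:8080", "http://proxy-b:8080"]

_ua_cycle = itertools.cycle(USER_AGENTS)
_proxy_cycle = itertools.cycle(PROXIES)

def next_request_profile():
    """Return headers and a proxy for the next request, rotating both pools."""
    return {"User-Agent": next(_ua_cycle)}, next(_proxy_cycle)

def throttled(fetch, urls, delay=1.0):
    """Call fetch(url, headers, proxy) for each URL, pausing between
    requests to stay polite and reduce the chance of rate limiting."""
    results = []
    for i, url in enumerate(urls):
        if i:
            time.sleep(delay)
        headers, proxy = next_request_profile()
        results.append(fetch(url, headers, proxy))
    return results
```

Rotating identifiers this way only spreads load; it does not replace respecting robots.txt and site terms.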
Posted 3 days ago
1.0 - 3.0 years
2 - 5 Lacs
mumbai
Work from Office
We are seeking an experienced Web Scraping Engineer with deep expertise in Scrapy to develop, maintain, and optimize web crawlers. The ideal candidate will have a strong background in extracting, processing, and managing large-scale web data efficiently.

Responsibilities:
- Write and maintain web scraping scripts using Python; optimize custom web scraping tools and workflows.
- Troubleshoot and resolve scraping challenges, including CAPTCHAs, rate limiting, and IP blocking.
- Handle dynamic content using headless browsers and work around CAPTCHAs and IP bans.
- Collaborate with senior developers to improve code quality and efficiency.

Skills Required:
- Python and scraping frameworks such as Scrapy, Selenium, BeautifulSoup, Requests, and Playwright.
- Experience with proxy management (rotating IPs, VPNs, residential proxies, etc.).
- Experience with CAPTCHA bypass techniques and anti-bot evasion.
- Ability to optimize crawlers for performance and minimize website detection risks.
- Strong background in databases and data storage systems.
- Prior experience scraping large-scale e-commerce websites would be a plus.
- Willingness to learn and take feedback constructively.
- Excellent communication and leadership abilities.
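As a minimal illustration of handling the rate limiting mentioned above, a retry wrapper with exponential backoff might look like the following sketch. The `fetch` callable and its `.status` attribute are assumptions of the example, not any specific library's API:

```python
import time

def fetch_with_backoff(fetch, url, max_retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry a fetch that signals rate limiting (HTTP 429) or transient
    server errors, doubling the wait before each retry."""
    delay = base_delay
    for attempt in range(max_retries):
        resp = fetch(url)
        if resp.status not in (429, 500, 502, 503):
            return resp
        sleep(delay)
        delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
    raise RuntimeError(f"giving up on {url} after {max_retries} attempts")
```

Injecting `sleep` as a parameter keeps the wrapper unit-testable without real delays.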
Posted 3 days ago
2.0 - 4.0 years
4 - 6 Lacs
hyderabad
Work from Office
Responsibilities: * Design, develop, test & maintain Python applications using Django framework * Collaborate with cross-functional teams on project requirements & deliverables
Posted 3 days ago
0.0 - 3.0 years
2 - 4 Lacs
bengaluru
Work from Office
We are hiring a Python Developer (0.5-2 yrs) for web scraping. Responsibilities: build & optimize scrapers; handle dynamic sites, proxies & CAPTCHAs; ensure data accuracy. Skills: Python, Scrapy, BeautifulSoup, Selenium, regex, debugging. Benefits: Provident fund, health insurance.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
ahmedabad, gujarat
On-site
As an entrepreneurial, passionate, and driven Data Engineer at Startup Gala Intelligence, backed by Navneet Tech Venture, you will play a crucial role in shaping the technology vision, architecture, and engineering culture of the company right from the beginning. Your contributions will be foundational in developing best practices and establishing the engineering team.

Key Responsibilities:
- Web Scraping & Crawling: Build and maintain automated scrapers to extract structured and unstructured data from websites, APIs, and public datasets.
- Scalable Scraping Systems: Develop multi-threaded, distributed crawlers capable of handling high-volume data collection without interruptions.
- Data Parsing & Cleaning: Normalize scraped data, remove noise, and ensure consistency before passing to data pipelines.
- Anti-bot & Evasion Tactics: Implement proxy rotation, CAPTCHA solving, and request throttling techniques to handle scraping restrictions.
- Integration with Pipelines: Deliver clean, structured datasets into NoSQL stores and ETL pipelines for further enrichment and graph-based storage.
- Data Quality & Validation: Ensure data accuracy, deduplicate records, and maintain a trust scoring system for data confidence.
- Documentation & Maintenance: Keep scrapers updated when websites change, and document scraping logic for reproducibility.

Qualifications Required:
- 2+ years of experience in web scraping, crawling, or data collection.
- Strong proficiency in Python (libraries such as BeautifulSoup, Scrapy, Selenium, Playwright, Requests).
- Familiarity with NoSQL databases (MongoDB, DynamoDB) and data serialization formats (JSON, CSV, Parquet).
- Experience handling large-scale scraping with proxy management and rate limiting.
- Basic knowledge of ETL processes and integration with data pipelines.
- Exposure to graph databases (Neo4j) is a plus.
As part of Gala Intelligence, you will be working in a tech-driven startup dedicated to solving fraud detection and prevention challenges. The company values transparency, collaboration, and individual ownership, creating an environment where talented individuals can thrive and contribute to impactful solutions. If you are someone who enjoys early-stage challenges, thrives on owning the entire tech stack, and is passionate about building innovative, scalable solutions, we encourage you to apply. Join us in leveraging technology to combat fraud and make a meaningful impact from day one.
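The deduplication and trust-scoring responsibility this posting describes can be sketched in plain Python. The normalization key, field names, and trust scores below are illustrative assumptions, not the company's actual scheme:

```python
def normalize(record):
    """Canonicalize fields so near-duplicate records collapse to one key."""
    return (record["name"].strip().lower(), record["city"].strip().lower())

def dedupe_with_trust(records, source_trust):
    """Merge duplicate records, keeping the copy from the most trusted
    source. source_trust maps source name -> score (higher wins);
    the scale is invented for the example."""
    best = {}
    for rec in records:
        key = normalize(rec)
        score = source_trust.get(rec["source"], 0)
        if key not in best or score > best[key][0]:
            best[key] = (score, rec)
    return [rec for _, rec in best.values()]
```

A production system would persist the scores alongside each record so downstream consumers can filter by confidence.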
Posted 4 days ago
6.0 - 11.0 years
6 - 10 Lacs
pune
Work from Office
Position Description: Founded in 1976, CGI is among the world's largest independent IT and business consulting services firms. With 94,000 consultants and professionals globally, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI's Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: Senior Data Engineer
Position: SSE LA AC
Experience: 6+ years
Category: Software Development
Job location: Pune
Position ID: J0825-0171
Work Type: Hybrid
Employment Type: Full Time Permanent
Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Required Skills & Experience:
- Proficient in Python, with deep experience using pandas or polars
- Strong understanding of ETL development, data extraction, and transformation
- Hands-on experience with SQL and querying large datasets
- Experience deploying workflows on Apache Airflow
- Familiar with web scraping techniques (Selenium is a plus)
- Comfortable working with various data formats and large-scale datasets
- Experience with Azure DevOps, including pipeline configuration and automation
- Familiarity with Pytest or equivalent test frameworks
- Strong communication skills and a team-first attitude
- Experience with Databricks
- Familiarity with AWS services
- Working knowledge of Jenkins and advanced ADO Pipelines

Key Responsibilities:
- Design, build, and maintain pipelines in Python to collect data from a wide range of sources (APIs, SFTP servers, websites, emails, PDFs, etc.)
- Deploy and orchestrate workflows using Apache Airflow
- Perform web scraping using libraries like requests, BeautifulSoup, Selenium
- Handle structured, semi-structured, and unstructured data efficiently
- Transform datasets using pandas and/or polars
- Write unit and component tests using pytest
- Collaborate with platform teams to improve the data scraping framework
- Query and analyze data using SQL (PostgreSQL, MSSQL, Databricks)
- Conduct code reviews, support best practices, and improve coding standards across the team
- Manage and maintain CI/CD pipelines (Azure DevOps Pipelines, Jenkins)

Tech stack:
- Main/essential: Python (pandas and/or polars), SQL, Azure DevOps, Airflow
- Additional: Databricks, AWS, Jenkins, ADO Pipelines

Skills: DevOps, Pandas, Python
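The extract/transform pipeline idea in this posting can be sketched as plain Python steps that an orchestrator such as Airflow would wire together as tasks. The step functions and field names here are invented for illustration, not part of any real DAG:

```python
def run_pipeline(raw_rows, steps):
    """Apply a list of transform steps in order -- a stand-in for how an
    orchestrator would chain extract/transform/load tasks."""
    data = raw_rows
    for step in steps:
        data = step(data)
    return data

def drop_empty(rows):
    # Remove rows whose (hypothetical) "value" field is missing or blank.
    return [r for r in rows if r.get("value") not in (None, "")]

def cast_values(rows):
    # Coerce string values to floats for downstream analysis.
    return [{**r, "value": float(r["value"])} for r in rows]
```

Keeping each step a pure function of rows makes the individual tasks easy to cover with pytest, as the posting asks.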
Posted 4 days ago
2.0 - 4.0 years
2 - 4 Lacs
himatnagar
Work from Office
Responsibilities: * Design, develop & maintain web scrapers using Scrapy & Python * Collaborate with cross-functional teams on project requirements * Ensure data accuracy & compliance with industry standards. Benefits: Work from home.
Posted 4 days ago
7.0 - 12.0 years
25 - 35 Lacs
gurugram, greater noida, delhi / ncr
Hybrid
A Python Lead with expertise in Django and AWS holds a pivotal role in the development and deployment of web applications, encompassing both technical leadership and hands-on contributions.

Required candidate profile — key areas:
- Python development and implementation
- Architecture and design
- Cloud infrastructure management
- Team management and collaboration
Posted 5 days ago
7.0 - 12.0 years
25 - 35 Lacs
hyderabad, chennai, bengaluru
Hybrid
A Python Lead with expertise in Django and AWS holds a pivotal role in the development and deployment of web applications, encompassing both technical leadership and hands-on contributions.

Required candidate profile — key areas:
- Python development and implementation
- Architecture and design
- Cloud infrastructure management
- Team management and collaboration
Posted 5 days ago
3.0 - 6.0 years
10 - 14 Lacs
coimbatore
Remote
Job Title: Web Scraping Specialist
Experience: 3-6 Years
Location: Remote (Work from Home)

About the job: We are seeking a highly skilled Web Scraping Specialist to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. As a Web Scraping Specialist, you will play a crucial role in collecting data for competitor analysis and other business intelligence purposes.

Responsibilities:
- Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale.
- Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives.
- Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites, including selecting appropriate tools, libraries, or frameworks for the task.
- Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and can handle changes in the website's structure.
- Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process.
- Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise due to website changes, data format modifications, or anti-scraping mechanisms.
- Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with a large volume of data or multiple data sources.
- Compliance and Legal Considerations: Stay up to date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations.
- Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow.
- Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively.
- Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process.

Requirements:
- Proven experience of 3+ years as a Web Scraping Specialist or in a similar role, with a track record of successful web scraping projects.
- Expertise in handling dynamic content, user-agent rotation, bypassing CAPTCHAs and rate limits, and utilizing proxy services.
- Knowledge of browser fingerprinting.
- Leadership experience.
- Proficiency in programming languages and tools commonly used for web scraping, such as Python, BeautifulSoup, Scrapy, or Selenium.
- Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping.
- Knowledge and experience in best-in-class storage and retrieval of large volumes of scraped data.
- Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management.
- Attention to detail and the ability to handle and process large volumes of data accurately.
- Familiarity with data cleansing techniques and data validation processes.
- Good communication skills and the ability to collaborate effectively with cross-functional teams.
- Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service.
- Strong problem-solving skills and the ability to adapt to changing web environments.

Preferred Qualifications:
- Bachelor's degree in Computer Science, Data Science, Information Technology, or related fields.
- Experience with cloud-based solutions and distributed web scraping systems.
- Familiarity with APIs and data extraction from non-public sources.
- Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory.
- Prior experience handling large-scale data projects and working with big data frameworks.
- Understanding of various data formats such as JSON, XML, CSV, etc.
- Experience with version control systems like Git.
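One of the skills this posting lists, regex-based data parsing, can be shown with a small standard-library sketch. The price format and pattern below are assumptions invented for the example:

```python
import re

# Matches a currency marker followed by a number with optional thousands
# separators and decimals, e.g. "₹ 1,299.00" or "$49.99" (illustrative).
PRICE_RE = re.compile(r"(?P<currency>₹|Rs\.?|\$)\s*(?P<amount>[\d,]+(?:\.\d+)?)")

def extract_price(text):
    """Pull a currency symbol and numeric amount out of free text,
    stripping thousands separators before conversion."""
    m = PRICE_RE.search(text)
    if not m:
        return None
    return m.group("currency"), float(m.group("amount").replace(",", ""))
```

Named groups keep the pattern self-documenting, which helps when scraped page layouts change and the regex has to be revised.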
Posted 5 days ago
6.0 - 15.0 years
0 Lacs
karnataka
On-site
The role involves designing and developing scalable BI and Data Warehouse (DWH) solutions, leveraging tools like Power BI, Tableau, and Azure Databricks. Responsibilities include overseeing ETL processes using SSIS, creating efficient data models, and writing complex SQL queries for data transformation. You will design interactive dashboards and reports, working closely with stakeholders to translate requirements into actionable insights. The role requires expertise in performance optimization, data quality, and governance. It includes mentoring junior developers and leveraging Python for data analysis (Pandas, NumPy, PySpark) and scripting ETL workflows with tools like Airflow. Experience with cloud platforms (AWS S3, Azure SDK) and managing databases such as Snowflake, Postgres, Redshift, and MongoDB is essential. Qualifications include 6-15 years of BI architecture and development experience, a strong background in ETL (SSIS), advanced SQL skills, and familiarity with the CRISP-DM model. You should also possess skills in web scraping, REST API interaction, and data serialization (JSON, CSV, Parquet). Strong programming foundations in Python and experience with version control for code quality and collaboration are required for managing end-to-end BI projects.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
You will be responsible for developing the Policies & Next Gen Rule Engine for Digital Lending. This includes working on various RESTful APIs and integrating them with different data sources. Your role will also involve developing various reports and automating tasks necessary for the business operations. The ideal candidate should possess good programming skills in Python and databases. Experience with NumPy and Pandas is highly preferred. Knowledge of AWS or other cloud platforms would be advantageous for this role. Additionally, familiarity with web scraping or crawling would be a plus. In this position, strong logical and analytical skills are essential. Good soft skills and communication abilities will also be beneficial for effective collaboration within the team and across departments.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
You will be responsible for designing, developing, and maintaining web scraping scripts using Python. Your expertise will be essential in utilizing web scraping libraries like Beautiful Soup, Scrapy, Selenium, and other tools to extract data from various websites. It will be crucial to write reusable, testable, and efficient code to extract both structured and unstructured data effectively. Additionally, you will play a key role in developing and maintaining software documentation for the web scraping scripts you create. Collaboration with software developers, data scientists, and other stakeholders will be necessary to strategize, design, develop, and launch new web scraping projects. Troubleshooting, debugging, and optimizing web scraping scripts will also be part of your responsibilities. Staying informed about the latest industry trends and technologies in automated data collection and cleaning will be expected. Your involvement in maintaining code quality, organizing projects, participating in code reviews, and ensuring solutions align with standards will be crucial for success in this role. Creating automated test cases to validate the functionality and performance of the code will also be part of your duties. Integration of data storage solutions such as SQL/NoSQL databases, message brokers, and data streams for storing and analyzing scraped data will be another aspect of your work. A bachelor's degree in Computer Science, Software Engineering, or a related field from a reputable institution is required, along with a minimum of 3-4 years of deep coding experience in Python. Experience with Python development and web scraping techniques is essential for this position. Familiarity with web frameworks like Django and Flask, as well as technologies such as SQL, Git, and Linux, is also necessary. Strong analytical and problem-solving skills, along with effective communication and teamwork abilities, will be key attributes for success in this role. 
This position is located in Gurgaon, and the work model is hybrid. If you possess the required skills and experience, we encourage you to apply for this job or refer someone who would be a good fit for this role. Stay updated with the latest industry trends and technologies to excel in this dynamic work environment.
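For a flavor of the structured-data extraction this posting describes, here is a dependency-free sketch using the standard library's `html.parser` as a stand-in for Beautiful Soup (the sample HTML in the test is invented):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Minimal stand-in for what BeautifulSoup's find_all('a') would do,
    collecting href attributes from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def extract_links(html):
    """Return all hyperlink targets found in an HTML string."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

In practice Beautiful Soup or lxml is preferred for malformed real-world markup; the stdlib version keeps the example self-contained.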
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
hyderabad, telangana
On-site
You are a Data Analyst with 1-2 years of experience, skilled in data visualization, data analysis, and web scraping. Your primary role involves collecting, processing, and analyzing large datasets from various sources to provide meaningful insights for business decisions. Your responsibilities include end-to-end data analysis and reporting, creating interactive dashboards using tools like Power BI and Tableau, and conducting data scraping using Python tools such as BeautifulSoup and Selenium. You will also be responsible for data cleaning, transformation, and validation using Excel, SQL, or Python libraries like pandas and NumPy. Collaboration with cross-functional teams to understand data requirements, ensuring data quality and security in scraping workflows, and providing ad hoc data reports are key aspects of your role. Proficiency in tools like MySQL, PostgreSQL, and Google Sheets is beneficial, along with a strong analytical mindset and experience in Agile environments. Qualifications include a Bachelor's degree in Computer Science or a related field, 1-2 years of data analytics experience, and knowledge of data integrity and compliance. Preferred attributes include the ability to handle large datasets efficiently, familiarity with cloud data platforms like AWS, and experience in a day-shift work environment. As a full-time Data Analyst, you will enjoy benefits such as Provident Fund and work in person at the Hyderabad location.
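The cleaning-and-validation step mentioned above can be sketched without pandas, in plain Python; the field names and rules here are illustrative only:

```python
def clean_records(rows):
    """Trim whitespace, normalize blank strings to None, and separate rows
    missing a required id -- the kind of cleanup done before loading data
    into a dashboard or database."""
    cleaned, rejected = [], []
    for row in rows:
        # Strip surrounding whitespace from every string field.
        row = {k: (v.strip() if isinstance(v, str) else v) for k, v in row.items()}
        # Treat empty strings as missing values.
        row = {k: (None if v == "" else v) for k, v in row.items()}
        (cleaned if row.get("id") else rejected).append(row)
    return cleaned, rejected
```

With pandas the same logic would typically be a `str.strip()` plus `dropna` on the id column; the pure-Python form makes each rule explicit.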
Posted 1 week ago
1.0 - 3.0 years
3 - 8 Lacs
bengaluru
Hybrid
About the Role:
Grade Level (for internal use): 08
Job Title: Associate Data Engineer

The Team: The Automotive Insights - Supply Chain and Technology and IMR department at S&P Global is dedicated to delivering critical intelligence and comprehensive analysis of the automotive industry's supply chain and technology. Our team provides actionable insights and data-driven solutions that empower clients to navigate the complexities of the automotive ecosystem, from manufacturing and logistics to technological innovations and market dynamics. We collaborate closely with industry stakeholders to ensure our research supports strategic decision-making and drives growth within the automotive sector. Join us to be at the forefront of transforming the automotive landscape with cutting-edge insights and expertise.

Responsibilities and Impact:
- Develop and maintain automated data pipelines to extract, transform, and load data from diverse online sources, ensuring high data quality.
- Build, optimize, and document web scraping tools using Python and related libraries to support ongoing research and analytics.
- Implement DevOps practices for deploying, monitoring, and maintaining machine learning workflows in production environments.
- Collaborate with data scientists and analysts to deliver reliable, well-structured data for analytics and modeling.
- Perform data quality checks, troubleshoot pipeline issues, and ensure alignment with internal taxonomies and standards.
- Stay current with advancements in data engineering, DevOps, and web scraping technologies, contributing to team knowledge and best practices.

What We're Looking For:

Basic Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 1 to 3 years of hands-on experience in data engineering, including web scraping and ETL pipeline development using Python.
- Proficiency with Python programming and libraries such as Pandas, BeautifulSoup, Selenium, or Scrapy.
- Exposure to implementing and maintaining DevOps workflows, including model deployment and monitoring.
- Familiarity with containerization technologies (e.g., Docker) and CI/CD pipelines for data and ML workflows.
- Familiarity with cloud platforms (preferably AWS).

Key Soft Skills:
- Strong analytical and problem-solving skills, with attention to detail.
- Excellent communication and collaboration abilities for effective teamwork.
- Ability to work independently and manage multiple priorities.
- Curiosity and a proactive approach to learning and applying new technologies.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
delhi
On-site
The selected intern will be responsible for developing and maintaining web applications using Python frameworks like Django or Flask. They will also be involved in web scraping projects to collect and process data from various sources. Utilizing MongoDB for efficient data storage, retrieval, and management will be another key aspect of the role. Additionally, the intern will be tasked with implementing and overseeing cron jobs and Celery for scheduled tasks and automation, as well as providing support for cloud infrastructure setup and maintenance on AWS. Collaborating with the development team to create, test, and refine software solutions, debugging and resolving technical issues, and documenting code and processes for future reference are essential duties. Keeping abreast of the latest industry trends and technologies is also crucial. Lifease Solutions LLP is a company that values the fusion of design and technology to solve problems and bring ideas to fruition. As a prominent provider of software solutions and services, the company is dedicated to helping businesses thrive. Headquartered in Noida, India, Lifease Solutions is focused on delivering top-notch, innovative solutions that drive value and growth for its clients. With specialization in the finance, sports, and capital market domains, the company has established itself as a reliable partner for organizations worldwide. The team takes pride in transforming small projects into significant successes and continually seeks ways to assist clients in maximizing their IT investments.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
haryana
On-site
As an intern at Experiences.digital, your day-to-day responsibilities will involve assisting in the integration of Google Gemini/ChatGPT into our existing e-commerce system. You will be tasked with creating APIs for this integration and handling web scraping for multiple functionalities, followed by integration with our system through APIs. Experiences.digital is a technology and innovative communication company that has a successful track record of creating, launching, and managing various websites, mobile applications, and e-commerce projects. Our client base is diverse, ranging from luxury retailers to manufacturing companies. With expertise in cloud networking, digital marketing, web and app development, branding, and creative services, we offer a wealth of cross-category experience to our clients right up to the last mile.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
You will be part of Xwiz Analytics, a company specializing in providing innovative data-driven solutions focusing on e-commerce, social media, and market analytics. As a Python Developer with expertise in web scraping, you will play a crucial role in developing robust data extraction pipelines. Your responsibilities will include creating and optimizing scalable web scraping solutions, building scraping scripts using Python libraries like Scrapy, BeautifulSoup, Selenium, and Playwright, managing scraping bots with advanced automation capabilities, and handling anti-scraping mechanisms effectively. Moreover, you will be responsible for ensuring data accuracy and quality through data validation and parsing rules, integrating web scraping pipelines with databases and APIs for seamless data storage and retrieval, debugging and troubleshooting scraping scripts, and implementing robust error-handling mechanisms to enhance the reliability of scraping systems. Collaboration with the team to define project requirements, deliverables, and timelines will also be a key aspect of your role. To excel in this position, you must have a strong proficiency in Python, particularly in web scraping libraries, experience in parsing structured and unstructured data formats, hands-on experience with anti-bot techniques, a solid understanding of HTTP protocols, cookies, and headers management, experience with multi-threading or asynchronous programming, familiarity with databases, knowledge of API integration, cloud platforms, and strong debugging skills. Furthermore, it would be advantageous to have experience with Docker, task automation tools, machine learning techniques, and a good understanding of ethical scraping practices and compliance with legal frameworks. 
Key attributes that will contribute to your success in this role include strong analytical and problem-solving skills, attention to detail, the ability to thrive in a fast-paced, collaborative environment, and a proactive approach to challenges. In return, Xwiz Analytics offers you the opportunity to work on exciting projects with large-scale data challenges, a competitive salary and benefits package, a supportive environment for professional growth, and a flexible work culture that promotes innovation and learning.
Posted 1 week ago
3.0 - 8.0 years
2 - 18 Lacs
jalandhar
Work from Office
We are looking for a highly skilled security researcher / reverse engineer with expertise in fast Python development: the ability to analyze platforms, identify fresh access points, and implement ultra-fast scrapers that feed our real-time WebSocket/raw data pipeline. Proxy experience required. Annual bonus offered.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
warangal, telangana
On-site
You will be responsible for building, improving, and extending NLP capabilities. Your tasks will include selecting properly annotated datasets for supervised learning techniques, using effective text representation techniques to develop useful features, identifying and utilizing the correct algorithms for specific NLP projects, developing NLP projects per prescribed requirements, training the developed NLP models, and evaluating their effectiveness. Additionally, you will conduct statistical analyses of models, adjust models where necessary, and extend machine learning frameworks and libraries for NLP projects. The ideal candidate should have 2-4 years of experience in Analytics and Machine Learning & Deep Learning, including expertise in areas such as Sentiment Analysis, Text Mining, Entity Extraction, Document Classification, Topic Modeling, Natural Language Understanding (NLU), and Natural Language Generation (NLG). You are expected to possess good experience in web scraping or data extraction through Selenium/Scrapy or other frameworks and related libraries like BeautifulSoup. Knowledge of advanced scraping techniques such as overcoming CAPTCHAs, proxies, browser fingerprinting, and bot detection bypassing is preferred. You should have a working knowledge of various databases like MySQL, HBase, and MongoDB, message queues, and RESTful web APIs. Expertise in open-source NLP toolkits such as CoreNLP, OpenNLP, NLTK, Gensim, LingPipe, Keras, TensorFlow, and Mallet, and ML/math toolkits like scikit-learn, MLlib, Theano, NumPy, etc., is essential. Experience in testing and deployment of machine learning/deep learning projects on the desired platform is required. Strong logical and analytical skills along with good soft and communication skills are desired qualities. Desirable certifications for this role include Google TensorFlow, Hadoop, Python, PySpark, SQL, and R.
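As a minimal illustration of the text representation techniques this posting mentions, a bag-of-words encoding can be built in a few lines of plain Python. Real projects would use something like scikit-learn's CountVectorizer; this library-free sketch shows the underlying idea:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def bag_of_words(docs):
    """Build a shared vocabulary and per-document term-count vectors --
    the simplest feature representation an NLP pipeline might start from."""
    vocab = sorted({tok for doc in docs for tok in tokenize(doc)})
    vectors = []
    for doc in docs:
        counts = Counter(tokenize(doc))
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors
```

The resulting count vectors can feed directly into a classifier for tasks like the sentiment analysis and document classification listed above.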
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
delhi
On-site
As an intern at Primetrade.ai, you will be responsible for developing Python programs for web scraping, data collection, and preprocessing from various web sources. Additionally, you will assist in designing and implementing trading algorithms using APIs such as Binance's, as well as conducting data analysis to identify trading patterns and trends on blockchain platforms. You will have the opportunity to collaborate on creating trading algorithms that integrate machine learning and statistical models. Moreover, you will be tasked with automating data collection, processing, and trading operations, as well as extracting and analyzing blockchain data to monitor trader behavior and performance. Furthermore, you will play a key role in generating insightful reports and visualizations on performance metrics. Primetrade.ai is a niche AI and blockchain venture studio that supports multiple product startups in the cutting-edge fields of AI and blockchain.
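A common starting point for the trading-pattern analysis this internship describes is a moving-average crossover. The sketch below is purely illustrative and is not Primetrade.ai's algorithm; the window lengths and price series are made up.

```python
def sma(prices, window):
    """Simple moving average; None until the window fills."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_signals(prices, short=3, long=5):
    """Emit 'buy' when the short SMA crosses above the long SMA, 'sell' on the reverse."""
    s, l = sma(prices, short), sma(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if None in (s[i - 1], l[i - 1], s[i], l[i]):
            continue
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, "buy"))
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, "sell"))
    return signals

prices = [10, 10, 10, 10, 10, 11, 12, 13, 14, 15]
print(crossover_signals(prices))  # [(5, 'buy')] - uptrend begins at index 5
```

In practice the price series would come from an exchange API rather than a literal list, and any signal would be backtested before driving live trades.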
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Senior Lead, you will be involved in Python-based automation initiatives encompassing AI/ML model testing and framework development. Your responsibilities will include designing frameworks, developing proofs of concept and accelerators, and executing tasks effectively and independently. You will also design and implement automation for data, web, and mobile applications, participate in demos and planning, and collaborate with various teams including business consultants, data scientists, engineers, and application developers.
Desired Skills and Experience:
- Strong knowledge of and experience with the Python programming language.
- Basic understanding of AI/ML and data science models and related libraries such as scikit-learn and matplotlib.
- Ability to autonomously design, develop, and architect highly available, highly scalable accelerators and frameworks from scratch.
- Hands-on experience with Pytest, Selenium, PySpark, NumPy, and Pandas.
- Proficiency in SQL and database knowledge.
- Solid grasp of CI/CD pipeline tools such as Azure/Jenkins.
- Good understanding of RESTful interfaces and microservice concepts.
- Extensive experience in building frameworks using Appium, Selenium, Pytest, Requests, and related libraries.
- Sound knowledge of web scraping using Python-based tools such as Scrapy and BeautifulSoup.
- Familiarity with Docker and Kubernetes is advantageous.
- Knowledge of cloud platforms: AWS, Azure, or GCP.
- Ability to rapidly grasp and apply complex technical information to testing scenarios.
- Attention to detail and adeptness at escalating and managing issues effectively.
- Familiarity with Agile methodology.
- Capability to manage multiple assignments simultaneously and adhere to delivery timelines.
- Knowledge of Rally/JIRA is a plus.
- Excellent written and verbal communication skills.
Join us in shaping the world's premier AI and advanced analytics team, where equal opportunity is valued.
Our competitive compensation packages ensure you are rewarded based on your expertise and experience.,
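Pytest-based framework work of the kind this role describes usually boils down to plain functions exercised by `test_*` functions with bare asserts, which pytest collects automatically. The sketch below is a hypothetical example, not this employer's framework; `validate_record` and its field rules are invented for illustration.

```python
# Minimal pytest-style check for a data pipeline. Pytest discovers the
# test_* functions below and reports each bare assert as pass/fail.

def validate_record(record):
    """Return a list of field-level problems; an empty list means the record is valid."""
    problems = []
    if not record.get("id"):
        problems.append("missing id")
    if record.get("price", 0) < 0:
        problems.append("negative price")
    return problems

def test_valid_record_passes():
    assert validate_record({"id": "A1", "price": 9.99}) == []

def test_negative_price_is_flagged():
    assert "negative price" in validate_record({"id": "A2", "price": -1})

# Quick smoke check without pytest installed:
if __name__ == "__main__":
    test_valid_record_passes()
    test_negative_price_is_flagged()
    print("all checks passed")
```

The same pattern scales to web and mobile automation by swapping the validated dict for a Selenium or Appium page interaction inside the test functions.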
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a global leader in assurance, tax, transaction, and advisory services, EY is dedicated to hiring and developing passionate individuals to contribute to building a better working world. Cultivating a culture that values training, opportunity, and creative freedom, EY focuses on nurturing your potential to help you become the best version of your professional self. The limitless potential at EY ensures motivating and fulfilling experiences throughout your career journey.
The open position is for Senior Consultant-National-Assurance-ASU in Chennai within the Audit - Standards and Methodologies team. The purpose of Assurance at EY is to inspire confidence and trust in a complex world by protecting the public interest, promoting transparency, supporting investor confidence and economic growth, and fostering talent to nurture future business leaders.
Key Responsibilities:
- Technical excellence in audit analytics and visualization
- Data extraction from client ERPs
- Risk-based analytics
- Proficiency in visualization tools such as Tableau, Spotfire, and QlikView
- Machine learning using R or Python with a strong statistical background
- Proficiency in the MS Office suite, including advanced Excel skills and macros
- Experience in NLP, web scraping, log analytics, TensorFlow, AI, or Beautiful Soup
Skills and attributes required:
- Qualification: BE/B.Tech, MSc in Computer Science/Statistics, or MCA
- Experience: 5-7 years of relevant experience
- Strong collaborative skills and the ability to work across multiple client departments
- A practical approach to problem-solving and delivering insightful solutions
- Agility, curiosity, mindfulness, positive energy, adaptability, and creativity
What EY offers: EY, with its global presence and strong brand, provides an inclusive environment where employees can collaborate with market-leading entrepreneurs, game-changers, and visionaries.
EY invests significantly in skills and learning for its employees, offering personalized Career Journeys and access to career frameworks to enhance roles, skills, and opportunities. EY is committed to being an inclusive employer that focuses on achieving a balance between delivering excellent client service and supporting employee career growth and wellbeing. If you meet the above criteria and are ready to contribute to building a better working world, contact us to explore opportunities with EY. Apply now to join us on this exciting journey.,
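The risk-based analytics this role calls for often begins with simple statistical screens over extracted ledger data. As a purely illustrative sketch (the entry values and threshold here are invented, not an EY method), a z-score filter flags journal entries that sit far from the population mean:

```python
import statistics

def zscore_outliers(amounts, threshold=2.0):
    """Flag entry amounts more than `threshold` population std devs from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [(i, a) for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

entries = [120, 115, 130, 125, 118, 122, 5000]
print(zscore_outliers(entries))  # [(6, 5000)] - the one anomalous entry
```

Flagged entries would then be routed to an auditor for substantive review; the statistical screen only prioritizes, it does not conclude.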
Posted 1 week ago