
319 Data Ingestion Jobs - Page 8

Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 12.0 years

19 - 25 Lacs

Chennai

Remote

Responsibilities : - Lead and implement end-to-end technical solutions within D365 Customer Insights (Data and Journeys) to meet diverse client requirements, from initial design to deployment and support. - Design and configure CI data unification processes, including data ingestion pipelines, matching and merging rules, and segmentation models to create comprehensive and actionable customer profiles. - Demonstrate deep expertise in data quality management and ensuring data integrity. - Proficiency in integrating data with CI-Data using various methods, including standard connectors, API calls, and custom ETL pipelines (Azure Data Factory, SSIS). - Experience with different data sources and formats. - Hands-on experience with the Power Platform, including Power Automate (flows for data integration and automation), Power Apps (for custom interfaces and extensions), and Dataverse (data modeling and storage). - Strong skills in JavaScript, Power Fx, or other scripting languages relevant to CI customization and plugin development. - Ability to develop, test, and deploy custom functionalities, workflows, and plugins to enhance CI capabilities. - Proven experience in customer journey mapping, marketing automation, and campaign management using CI-Journeys. - Ability to design and implement personalized customer journeys based on data insights and business objectives. - Proactively troubleshoot and resolve technical issues within CI-Data and CI-Journeys environments, focusing on data integrity, performance optimization, and system stability. - Conduct root cause analysis and implement effective solutions. - Strong analytical and problem-solving skills, with experience leveraging CI data analytics to generate actionable business insights and recommendations. - Ability to translate data into compelling narratives and visualizations. - Excellent written and verbal communication skills for effectively liaising with stakeholders, clients, and internal teams. - Ability to clearly articulate technical concepts to both technical and non-technical audiences. - Provide technical mentorship and guidance to junior team members. - Contribute to knowledge sharing and best practices within the team. - Stay up-to-date with the latest D365 Customer Insights features, updates, and best practices. - Proactively seek opportunities to expand your technical knowledge and skills. Required Skills & Experience : - 9+ years of experience working with D365 Customer Insights (Data and Journeys), with a strong focus on technical implementation and configuration. - In-depth understanding of data unification, segmentation, and profile creation within CI-Data. - Proficiency in data integration with CI-Data using connectors, API calls, and custom ETL pipelines. - Hands-on experience with Power Platform tools (Power Automate, Power Apps, Dataverse). - Strong skills in JavaScript, Power Fx, or other scripting languages used for CI customization and plugin development. - Proven experience in customer journey mapping, marketing automation, and campaign management within CI-Journeys. - Deep understanding of marketing and customer experience principles and their application within CI. - Strong analytical and problem-solving skills, with experience using CI data analytics to drive business insights. - Excellent written and verbal communication skills.

Posted 1 month ago

Apply

10.0 - 12.0 years

32 - 37 Lacs

Kolkata

Remote

Responsibilities : - Lead and implement end-to-end technical solutions within D365 Customer Insights (Data and Journeys) to meet diverse client requirements, from initial design to deployment and support. - Design and configure CI data unification processes, including data ingestion pipelines, matching and merging rules, and segmentation models to create comprehensive and actionable customer profiles. - Demonstrate deep expertise in data quality management and ensuring data integrity. - Proficiency in integrating data with CI-Data using various methods, including standard connectors, API calls, and custom ETL pipelines (Azure Data Factory, SSIS). - Experience with different data sources and formats. - Hands-on experience with the Power Platform, including Power Automate (flows for data integration and automation), Power Apps (for custom interfaces and extensions), and Dataverse (data modeling and storage). - Strong skills in JavaScript, Power Fx, or other scripting languages relevant to CI customization and plugin development. - Ability to develop, test, and deploy custom functionalities, workflows, and plugins to enhance CI capabilities. - Proven experience in customer journey mapping, marketing automation, and campaign management using CI-Journeys. - Ability to design and implement personalized customer journeys based on data insights and business objectives. - Proactively troubleshoot and resolve technical issues within CI-Data and CI-Journeys environments, focusing on data integrity, performance optimization, and system stability. - Conduct root cause analysis and implement effective solutions. - Strong analytical and problem-solving skills, with experience leveraging CI data analytics to generate actionable business insights and recommendations. - Ability to translate data into compelling narratives and visualizations. - Excellent written and verbal communication skills for effectively liaising with stakeholders, clients, and internal teams. - Ability to clearly articulate technical concepts to both technical and non-technical audiences. - Provide technical mentorship and guidance to junior team members. - Contribute to knowledge sharing and best practices within the team. - Stay up-to-date with the latest D365 Customer Insights features, updates, and best practices. - Proactively seek opportunities to expand your technical knowledge and skills. Required Skills & Experience : - 9+ years of experience working with D365 Customer Insights (Data and Journeys), with a strong focus on technical implementation and configuration. - In-depth understanding of data unification, segmentation, and profile creation within CI-Data. - Proficiency in data integration with CI-Data using connectors, API calls, and custom ETL pipelines. - Hands-on experience with Power Platform tools (Power Automate, Power Apps, Dataverse). - Strong skills in JavaScript, Power Fx, or other scripting languages used for CI customization and plugin development. - Proven experience in customer journey mapping, marketing automation, and campaign management within CI-Journeys. - Deep understanding of marketing and customer experience principles and their application within CI. - Strong analytical and problem-solving skills, with experience using CI data analytics to drive business insights. - Excellent written and verbal communication skills.

Posted 1 month ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Mumbai

Work from Office

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Data Flow Gen 2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, and SAP BW will be a plus. - Nice to have: ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
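
To make the ingestion-and-transform work above concrete, here is a minimal PySpark sketch of the kind of step a Fabric notebook might contain; the file path, column names, and table name are hypothetical placeholders rather than anything specified in the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Fabric notebook a SparkSession is usually pre-created; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Ingest raw CSV files from the lakehouse Files area (path is hypothetical).
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/raw/sales/*.csv")
)

# Basic cleansing/transformation: normalise column names, cast types, drop bad rows.
curated_df = (
    raw_df
    .withColumnRenamed("Order Date", "order_date")
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("amount").isNotNull())
)

# Load the curated output into a lakehouse Delta table (table name is hypothetical).
curated_df.write.mode("overwrite").format("delta").saveAsTable("curated_sales")
```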

Posted 1 month ago

Apply

8.0 - 10.0 years

12 - 16 Lacs

Noida

Work from Office

Your Role & Responsibilities : - You will own a part of the Microsoft Ecosystem solution architecture (HCLTech Microsoft Industry Cloud) technology stack and portfolio to ensure alignment with both HCLTech and Microsoft sales and strategy priorities. - Continually review, understand, and analyze Microsoft strategy and technology roadmap and ensure that this is contextualized and communicated, as a joint PoV, to the HCLTech key stakeholders through reports, documents, and roadmaps. - Support proactive and reactive sales motions, including active participation in solution response creation and in customer and advisor group conversations. - Maintain messaging, capability packs, solution, and creative briefs for in-scope sections of the overall HCLTech Microsoft Industry Cloud technology stack. - Capabilities should be considered across the Design, Sell, Implement, Deliver, and Govern phases, and tailored to the appropriate stakeholder group. - Based on awareness of appropriate strategies, industry and market trends, and customer needs, ensure that HCLTech Product and Service owners are creating products and offerings that correctly exploit the solution stack. - Ensure the HCLTech standard services are validated by MSFT and are published in the MSFT sales services catalogue, and that MSFT field sales are incentivized to sell through the indirect channel. - Track relevant Microsoft funding programs and enabling services, ensuring that HCLTech generates revenue through this channel to fund strategic programs. - Work collaboratively with Product and Services owners and other stakeholders to ensure product revision and launch plans are in place and executed effectively to create market momentum, and, as needed, ensure plans are suitably amplified within Microsoft stakeholder groups. - Work with teams within HCLTech, within Microsoft, and with third parties to make sure that all stakeholders understand the differentiated value of the solution stack and how HCLTech is uniquely differentiated to drive digital transformation across the Microsoft cloud. - As the point of connection between both companies, the GTM Manager will manage relationships with a variety of senior stakeholders with sales, commercial, and technical backgrounds. - This will involve solving issues, addressing escalations, and working proactively to ensure that services can be sold, implemented, and supported throughout the entire lifecycle. - Strategies and services offerings are developed through many channels, with multiple partners, across multiple HCLTech lines of business, and with numerous external stakeholders. - A strong program management approach is necessary to track, report, and ensure timely completion of all activities. Qualifications & Experience : Minimum Requirements : - While deep technical skills are not required for this role, candidates should have a minimum of 8-10 years' experience in a technical role with a GSI. - Successful candidates must be able to apply their experience to solve business issues, create excited customers, and build commercially viable products. - The Cloud business is continually changing, so continual learning and the ability to research new topics are critical for success in this role. - Experience of Product or Category Management, including ownership of the entire lifecycle and financial reporting, is a key aspect of the role, and the candidate should be able to demonstrate development of a product portfolio and market share.
- The candidate should have experience in similar roles, ideally within a company of similar scale and scope to HCLTech. - Experience of one or more of the following industry verticals: Manufacturing, including Industry 4.0 and IoT-enabled connected services; Life Sciences and Healthcare, including connected and smart platforms and medtech. - Experience of one or more of the following horizontal workloads: Data and Analytics, including data ingestion, storage, governance, and AI (Microsoft IDP/Fabric and OpenAI); Cybersecurity, including managed services from edge through core to device (Sentinel, Defender); Digital Workplace, including Teams, Viva, and frontline worker services. Necessary Skills : - Celebrates team-first and individual successes and creates a safe environment for constructive feedback. - Ability to work flexibly among quickly changing priorities while consistently delivering to tight deadlines. - High levels of influencing skills, including professional, effective, and persuasive oral and written communications to all audience levels, technical and non-technical, including executives. - Ability to be collaborative and collegial and possess the confidence to make tough decisions. - Anticipates problems and sees how a problem and its solution will affect other projects, people, or processes. - Should be able not only to identify and document dependencies, conflicts, roadblocks, and issues, but also to solve them while focusing on agreed deadlines. - Must be able to set own priorities, follow through, and work towards agreed targets with a minimum of supervision. - Technology is rapidly changing and evolving, and successful candidates must take responsibility for the continual development of their own skills. - Strong executive presence, including written communication, executive reporting, and presentation skills, with a high degree of comfort presenting to large and small technical audiences. - Inclusive and collaborative - driving teamwork and cross-team alignment and building relationships and networks across the various ecosystems to leverage ideas and best practice. - A methodical approach to detail is important. - Should be able to identify sources of data and analyze information from multiple sources, including technology trends, buyer behaviors, and financial performance, to create actionable and positive recommendations.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 18 Lacs

Hyderabad

Hybrid

About the Role: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems. Key Responsibilities: Design, develop, test, and maintain scalable ETL data pipelines using Python. Work extensively on Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing Cloud Functions for lightweight serverless compute BigQuery for data warehousing and analytics Cloud Composer for orchestration of data workflows (based on Apache Airflow) Google Cloud Storage (GCS) for managing data at scale IAM for access control and security Cloud Run for containerized applications Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery. Implement and enforce data quality checks, validation rules, and monitoring. Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions. Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects. Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL. Document pipeline designs, data flow diagrams, and operational support procedures. Required Skills: 4-6 years of hands-on experience in Python for backend or data engineering projects. Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.). Solid understanding of data pipeline architecture, data integration, and transformation techniques. Experience in working with version control systems like GitHub and knowledge of CI/CD practices. Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.). Good to Have (Optional Skills): Experience working with Snowflake cloud data platform. Hands-on knowledge of Databricks for big data processing and analytics. Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
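
As a rough illustration of the batch-ingestion side of this work, the sketch below loads files from Cloud Storage into BigQuery using the google-cloud-bigquery client; the project, dataset, bucket, and file names are hypothetical placeholders, not details from the posting.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Target table and source files are hypothetical placeholders.
table_id = "my-project.analytics.orders_raw"
source_uri = "gs://my-ingest-bucket/orders/2024-06-01/*.csv"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # let BigQuery infer the schema for this sketch
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Kick off the load job and block until it completes.
load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()

destination = client.get_table(table_id)
print(f"Loaded {destination.num_rows} rows into {table_id}")
```

In a production pipeline this step would typically be wrapped in an orchestrated task (e.g. Cloud Composer) with explicit schemas and data quality checks rather than schema autodetection.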

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Mumbai, Maharashtra, India

On-site

We're looking for an experienced professional to join our team in Hyderabad, accountable for the setup and maintenance of External Data Streams within assigned clinical trials. You'll ensure these streams adhere to best practices and defined guidelines, contributing to the integrity and efficiency of our clinical data. Key Responsibilities Accountable for the setup and maintenance of External Data Streams within assigned trial(s) according to best practices and defined guidelines. External Data Streams include, but are not limited to, ePRO, eSource, EHR, Real World data , and traditional and novel clinical data streams (e.g., Labs, ECG, Biomarkers, PK/PD, PGx, IVRS). Your activities and deliverables will include, but are not limited to: Development of trial-specific data transfer agreements and specifications . Verification of data transfers . Setup of automated data ingestion into the clinical data repository. Principal Relationships Reports to : A people manager position within the functional area (e.g., Data Acquisition Leader). Functional Contacts within IDAR (Internal) : Leaders and/or leads in Data Management and Central Monitoring, Clinical and Statistical Programming, Clinical Data Standards, Regulatory Medical Writing, IDAR Therapeutic Area Lead, and system support organizations. Functional Contacts within JJ Innovative Medicine (as collaborator or peer) : Global Program Leaders, Global Trial Leaders, Biostatisticians, Clinical Teams, Procurement, Finance, Legal, Global Privacy, Regulatory, Strategic Partnerships, Human Resources, and Project Coordinators. External Contacts : External partners and suppliers, CRO management and vendor liaisons, industry peers, and working groups. Education and Experience Requirements Required Bachelor's degree (e.g., BS, BA) or equivalent professional experience is required, preferably in Clinical Data Management, Health, or Computer Sciences. Advanced degrees (e.g., Master, PhD) are preferred. Approximately 5+ years of experience in the Pharmaceutical, CRO, or Biotech industry or a related field. Proven knowledge of data management practices (including tools and processes). Proven knowledge of regulatory guidelines (e.g., ICH-GCP) and standards (e.g., CDASH, SDTM). Intermediate project and risk management skills with an established track record of delivering successful outcomes. Established track record collaborating with multi-functional teams in a matrix environment and partnering with/managing stakeholders, customers, and vendors. Strong communication, leadership, influencing, and decision-making skills. Strong written and verbal communication skills (in English). Demonstrated technical expertise developing and maintaining External Data Streams (e.g., Labs, ECG, Biomarkers, PK/PD, PGx, IVRS) and associated components (e.g., Data Transfer Agreements, Specifications, transfer file verification, data ingestion set-up). Preferred Innovative thinking for optimal design and execution of clinical development strategies. Ability to contribute to the development and implementation of business change or innovative ways of working. Experience working with data from EHR/EMR, Digital Health technologies, Real-World Data, or similar, eDC systems, eDC integration tools, and general data capture platforms .

Posted 1 month ago

Apply

12.0 - 16.0 years

1 - 1 Lacs

Hyderabad

Remote

We're Hiring: Azure Data Factory (ADF) Developer - Hyderabad Location: Onsite at Canopy One Office, Hyderabad/Remote Type: Full-time/Part-time/Contract | Offshore role | Must be available to work in Eastern Time Zone (EST) We’re looking for an experienced ADF Developer to join our offshore team supporting a major client. This role focuses on building robust data pipelines using Azure Data Factory (ADF) and working closely with client stakeholders on transformation logic and data movement. Key Responsibilities Design, build, and manage ADF data pipelines Implement transformations and aggregations based on mappings provided Work with data from the bronze (staging) area, pre-loaded via Boomi Collaborate with client-side data managers (based in EST) to deliver clean, reliable datasets Requirements Proven hands-on experience with Azure Data Factory Strong understanding of ETL workflows and data transformation Familiarity with data staging/bronze layer concepts Willingness to work in Eastern Time Zone (EST) hours Preferred Qualifications Knowledge of Kimball Data Warehousing (a huge advantage!) Experience working in an offshore coordination model Exposure to Boomi is a plus
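
Most of the pipeline logic described here lives inside ADF itself, but runs are often triggered or monitored from Python. The sketch below uses the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, factory, pipeline, and parameter names are hypothetical placeholders, not details from this posting.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# All identifiers below are hypothetical placeholders.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "rg-data-platform"
factory_name = "adf-bronze-ingestion"
pipeline_name = "pl_bronze_to_silver"

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, subscription_id)

# Trigger a run of an existing pipeline, passing a parameter for the load date.
run_response = adf_client.pipelines.create_run(
    resource_group, factory_name, pipeline_name,
    parameters={"load_date": "2024-06-01"},
)

# Check the status of the run (polling/retry logic omitted for brevity).
run = adf_client.pipeline_runs.get(resource_group, factory_name, run_response.run_id)
print(run.status)
```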

Posted 1 month ago

Apply

4.0 - 5.0 years

10 - 11 Lacs

Mumbai

Remote

Location: Remote / Pan India iSource Services is hiring for one of their clients for the position of Adobe CDP Consultant (Execution Specialist). About the role: We are hiring an Adobe CDP Consultant (Execution Specialist) to support hands-on implementation of Adobe Real-Time CDP. Working under a Senior Consultant, you will manage technical setup, data flows, segmentation, and activation processes. The ideal candidate has strong execution experience in CDP environments and a solid grasp of data integration and marketing tech ecosystems. Roles & Responsibilities: Assist in the technical setup, configuration, and deployment of Adobe Real-Time CDP. Support data ingestion, transformation, identity resolution, and activation workflows. Implement and execute audience segmentation strategies based on strategic direction. Ensure seamless integration between Adobe Real-Time CDP and downstream marketing platforms. Troubleshoot data pipeline issues and optimize data flow performance. Build dashboards and reports to monitor campaign performance, data quality, and audience insights. Collaborate with cross-functional teams including marketing, analytics, and IT to deliver on marketing use cases. Stay current with Adobe Experience Cloud features and enhancements to drive innovation and continuous improvement. Required Skills: 4-5 years of CDP execution experience Hands-on with Adobe Real-Time CDP or similar platforms Proficient in data modeling, identity resolution, and segmentation Knowledge of APIs, data workflows, and troubleshooting Familiarity with Adobe Analytics, Target, and automation tools Adobe certifications are a plus

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Chennai

Work from Office

Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles, data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States. Responsibilities: Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage). Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security. Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g., DOMO, Looker, Databricks). Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards. Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability. Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development. Requirements: Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g., Mathematics, Statistics, Engineering). 3+ years of experience using GCP Data Lake and Storage Services. Certifications in GCP are preferred (e.g., Professional Cloud Developer, Professional Cloud Database Engineer). Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows. Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
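
For the streaming side of a GCP data lake like the one described (Pub/Sub into BigQuery via Dataflow), the following is a minimal Apache Beam sketch; the subscription, table, and event field names are hypothetical placeholders, and a real job would add windowing, dead-lettering, and schema management.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Subscription and table names are hypothetical placeholders.
SUBSCRIPTION = "projects/my-project/subscriptions/events-sub"
TABLE = "my-project:datalake.events_raw"

options = PipelineOptions(streaming=True)


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message payload into a BigQuery-ready row."""
    event = json.loads(message.decode("utf-8"))
    return {
        "event_id": event["id"],
        "event_type": event["type"],
        "payload": json.dumps(event),
    }


with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
        | "ParseJson" >> beam.Map(parse_event)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            TABLE,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```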

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Chennai

Work from Office

Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles, data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States. Responsibilities: Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage). Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security. Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g., DOMO, Looker, Databricks). Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards. Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability. Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development. Requirements: Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g., Mathematics, Statistics, Engineering). 4+ years of experience using GCP Data Lake and Storage Services. Certifications in GCP are preferred (e.g., Professional Cloud Developer, Professional Cloud Database Engineer). Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows. Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organisations seeking independent talent. Flexing It has partnered with our client, a global leader in energy management and automation, which is seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for their different data consumers, ranging from BI and analytics to data science applications. The role requires working with current technologies, in particular Apache Spark, Lambda & Step Functions, Glue Data Catalog, and RedShift, in an AWS environment. Key Responsibilities: Design and develop new data ingestion patterns into IntelDS Raw and/or Unified data layers based on the requirements and needs for connecting new data sources or for building new data objects. Working with ingestion patterns allows the data pipelines to be automated. Participate in and apply DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment. This can include the design and implementation of end-to-end data integration tests and/or CI/CD pipelines. Analyze existing data models, and identify and implement performance optimizations for data ingestion and data consumption. The objective is to accelerate data availability within the platform and to consumer applications. Support client applications in connecting to and consuming data from the platform, and ensure they follow our guidelines and best practices. Participate in the monitoring of the platform and debugging of detected issues and bugs. Skills required: Minimum of 3 years prior experience as a data engineer with proven experience on Big Data and Data Lakes in a cloud environment. Bachelor's or Master's degree in computer science or applied mathematics (or equivalent). Proven experience working with data pipelines / ETL / BI regardless of the technology. Proven experience working with AWS, including at least 3 of: RedShift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda. Big Data technologies and distributed systems: one of Spark, Presto or Hive. Python language: scripting and object-oriented. Fluency in SQL for data warehousing (RedShift in particular is a plus). Good understanding of data warehousing and data modelling concepts. Familiarity with Git, Linux, and CI/CD pipelines is a plus. Strong systems/process orientation with demonstrated analytical thinking, organization skills and problem-solving skills. Ability to self-manage, prioritize and execute tasks in a demanding environment. Strong consultancy orientation and experience, with the ability to form collaborative, productive working relationships across diverse teams and cultures, is a must. Willingness and ability to train and teach others. Ability to facilitate meetings and follow up with resulting action items.
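
Since the stack named here centres on Lambda, Step Functions, and S3-based data layers, here is a minimal boto3 sketch of an S3-event-driven ingestion Lambda; the bucket names and key prefix are hypothetical placeholders, and a real implementation would also register the data in the Glue Data Catalog and add validation and error handling.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

# Target bucket for the raw data layer is a hypothetical placeholder.
RAW_BUCKET = "example-raw-layer"


def lambda_handler(event, context):
    """Copy newly landed objects into the raw layer when an S3 put event fires."""
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Copy the object as-is; downstream jobs (Spark/Glue) pick it up from here.
        s3.copy_object(
            Bucket=RAW_BUCKET,
            Key=f"ingest/{key}",
            CopySource={"Bucket": source_bucket, "Key": key},
        )

    return {"statusCode": 200, "body": json.dumps("ingestion complete")}
```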

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Noida

Work from Office

Join our Team. About This Opportunity. Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments. What You Will Do. Develop and deploy machine learning models for various applications, including chatbots, XGBoost, random forest, NLP, computer vision, and generative AI. Utilize Python for data manipulation, analysis, and modeling tasks. Proficiency in SQL for querying and analyzing large datasets. Experience with Docker and Kubernetes for containerization and orchestration of applications. Basic knowledge of PySpark for distributed computing and data processing. Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions. Deploy machine learning models into production environments and ensure scalability and reliability. Preferably have experience working with Google Cloud Platform (GCP) services for data storage, processing, and deployment. Experience in analysing complex problems and translating them into algorithms. Backend development of REST APIs using Flask and FastAPI. Deployment experience with CI/CD pipelines. Working knowledge of handling data sets and data pre-processing through PySpark. Writing queries targeting Cassandra and PostgreSQL databases. Design principles in application development. The Skills You Bring. Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. A Master's degree or PhD is preferred. 3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or related industry. Proven experience in model development, evaluation, and deployment. Strong programming skills in Python and SQL. Familiarity with Docker, Kubernetes, and PySpark. Solid understanding of machine learning techniques and algorithms. Experience working with cloud platforms, preferably GCP. Excellent problem-solving skills and ability to work independently as well as part of a team. Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.
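
Since the posting names XGBoost among the modelling tools, the following is a minimal, self-contained training sketch on synthetic data; the features, labels, and hyperparameters are illustrative placeholders rather than anything specific to Ericsson's use cases.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic data stands in for real telecom features (e.g. usage or churn signals).
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a gradient-boosted tree classifier with modest, illustrative hyperparameters.
model = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    eval_metric="auc",
)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
preds = model.predict_proba(X_test)[:, 1]
print("test AUC:", roc_auc_score(y_test, preds))
```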
Primary country and city: India (IN) || Bangalore. Req ID: 763993.

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Kolkata

Work from Office

Join our Team. About This Opportunity. Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments. What You Will Do. Develop and deploy machine learning models for various applications, including chatbots, XGBoost, random forest, NLP, computer vision, and generative AI. Utilize Python for data manipulation, analysis, and modeling tasks. Proficiency in SQL for querying and analyzing large datasets. Experience with Docker and Kubernetes for containerization and orchestration of applications. Basic knowledge of PySpark for distributed computing and data processing. Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions. Deploy machine learning models into production environments and ensure scalability and reliability. Preferably have experience working with Google Cloud Platform (GCP) services for data storage, processing, and deployment. Experience in analysing complex problems and translating them into algorithms. Backend development of REST APIs using Flask and FastAPI. Deployment experience with CI/CD pipelines. Working knowledge of handling data sets and data pre-processing through PySpark. Writing queries targeting Cassandra and PostgreSQL databases. Design principles in application development. The Skills You Bring. Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. A Master's degree or PhD is preferred. 3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or related industry. Proven experience in model development, evaluation, and deployment. Strong programming skills in Python and SQL. Familiarity with Docker, Kubernetes, and PySpark. Solid understanding of machine learning techniques and algorithms. Experience working with cloud platforms, preferably GCP. Excellent problem-solving skills and ability to work independently as well as part of a team. Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.
Primary country and city: India (IN) || Bangalore. Req ID: 763993.

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Join our Team. About This Opportunity. Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments. What You Will Do. Develop and deploy machine learning models for various applications, including chatbots, XGBoost, random forest, NLP, computer vision, and generative AI. Utilize Python for data manipulation, analysis, and modeling tasks. Proficiency in SQL for querying and analyzing large datasets. Experience with Docker and Kubernetes for containerization and orchestration of applications. Basic knowledge of PySpark for distributed computing and data processing. Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions. Deploy machine learning models into production environments and ensure scalability and reliability. Preferably have experience working with Google Cloud Platform (GCP) services for data storage, processing, and deployment. Experience in analysing complex problems and translating them into algorithms. Backend development of REST APIs using Flask and FastAPI. Deployment experience with CI/CD pipelines. Working knowledge of handling data sets and data pre-processing through PySpark. Writing queries targeting Cassandra and PostgreSQL databases. Design principles in application development. The Skills You Bring. Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. A Master's degree or PhD is preferred. 3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or related industry. Proven experience in model development, evaluation, and deployment. Strong programming skills in Python and SQL. Familiarity with Docker, Kubernetes, and PySpark. Solid understanding of machine learning techniques and algorithms. Experience working with cloud platforms, preferably GCP. Excellent problem-solving skills and ability to work independently as well as part of a team. Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.
Primary country and city: India (IN) || Bangalore. Req ID: 763993.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Job Title: Data Engineer Location: Hyderabad, India Client: TechnoGen's client, a Top Global Capability Center (GCC) Contact: Interested candidates can share profiles at bhavitha.g@technogenindia.com About the Team Join a dynamic and fast-paced environment with TechnoGen's client, a top-tier Global Capability Center (GCC), where the Enterprise Data and Analytics (ED&A) Delivery Team is driving the organization's transformation into a data-driven powerhouse. Key focus areas include: Centralizing data from enterprise systems like ERP, E-Commerce, CRM, Order Management, etc., into a cloud-based data warehouse. Designing robust ETL/ELT pipelines using cutting-edge tools and frameworks. Delivering curated, business-ready datasets for strategic decision-making and self-service analytics. Enforcing enterprise-wide data quality, governance, and testing standards. Orchestrating workflows with tools like Airflow/Cloud Composer. Collaborating with analysts, product owners, and BI developers to meet business goals through data. Opportunity Overview We are hiring a Data Engineer / ETL Developer to join our client's Technology & Innovation Center in Hyderabad, India. This role involves designing, developing, and maintaining scalable data pipelines that support enterprise analytics and decision-making. You will work with modern data tools like Google BigQuery, Python, DBT, SQL, and Cloud Composer (Airflow) to integrate diverse data sources into a centralized cloud data warehouse. Key Responsibilities Design and develop scalable data integration pipelines for structured/semi-structured data from systems like ERP, CRM, E-Commerce, and Order Management. Build analytics-ready pipelines transforming raw data into curated datasets for reporting and insights. Implement modular and reusable DBT-based transformation logic aligned with business needs. Optimize BigQuery performance using best practices (partitioning, clustering, query tuning). Automate workflows with Cloud Composer (Airflow) for reliable scheduling and dependency handling. Write efficient Python and SQL code for ingestion, transformation, validation, and tuning. Develop and maintain strong data quality checks and validation mechanisms. Collaborate cross-functionally with analysts, BI developers, and product teams. Utilize modern ETL platforms like Ascend.io, Databricks, Dataflow, or Fivetran. Contribute to CI/CD processes, monitoring, and technical documentation. Desired Profile Bachelor's or Master's in Computer Science, Data Engineering, Information Systems, or related fields. 4+ years of hands-on experience in data engineering and analytics pipeline development. Expertise in: Google BigQuery; Python for scripting and integration; SQL for complex transformations and optimization; DBT for modular transformation pipelines; Airflow / Cloud Composer for orchestration. Solid understanding of ETL/ELT development, data architecture, and governance frameworks. Preferred (Nice to Have) Experience with Ascend.io, Databricks, Fivetran, or Dataflow. Familiarity with tools like Collibra for data governance. Exposure to CI/CD pipelines, Git-based workflows, and infrastructure automation. Experience with event-driven streaming (Pub/Sub, Kafka). Agile methodology experience (Scrum/Kanban). Excellent communication and problem-solving skills. Ability to handle architecture and design troubleshooting on the ground. Quick learner with an innovative mindset. Apply Now: Interested candidates, please send your updated profile to bhavitha.g@technogenindia.com
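
The orchestration piece described above (Cloud Composer scheduling ingestion plus DBT transformations) might look roughly like the DAG sketch below; the DAG id, bucket, dataset, table, and dbt project path are hypothetical placeholders and not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

# Bucket, dataset, and dbt project path are hypothetical placeholders.
with DAG(
    dag_id="erp_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # run daily at 02:00
    catchup=False,
) as dag:

    # Land raw ERP extracts from Cloud Storage into a BigQuery staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_erp_orders_to_bq",
        bucket="example-landing-zone",
        source_objects=["erp/orders/{{ ds }}/*.csv"],
        destination_project_dataset_table="analytics_raw.erp_orders",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_APPEND",
        autodetect=True,
    )

    # Run the DBT models that build curated, analytics-ready datasets.
    run_dbt = BashOperator(
        task_id="dbt_run_curated_models",
        bash_command="cd /home/airflow/gcs/data/dbt && dbt run --select curated",
    )

    load_raw >> run_dbt
```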

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are hiring a Cloud Data Scientist to build and scale data science solutions in cloud-native environments. Ideal for candidates who specialize in analytics and machine learning using cloud ecosystems. Key Responsibilities: Design predictive and prescriptive models using cloud ML tools Use BigQuery, SageMaker, or Azure ML Studio for scalable experimentation Collaborate on data sourcing, transformation, and governance in the cloud Visualize insights and present findings to stakeholders Required Skills & Qualifications: Strong Python/R skills and experience with cloud ML stacks (AWS, GCP, or Azure) Familiarity with cloud-native data warehousing and storage (Redshift, BigQuery, Data Lake) Hands-on with model deployment, CI/CD, and A/B testing in the cloud Bonus: Background in NLP, time series, or geospatial analysis Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Reddy Delivery Manager Integra Technologies

Posted 1 month ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time & batch data systems across analytics, ML, and product teams. A hybrid work option is available. Required Candidate profile 3+ yrs in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL & CDC experience. Must know data lakes, warehousing, and orchestration tools.
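
Given the Kafka/Debezium CDC experience called for here, the following is a minimal kafka-python consumer sketch for Debezium change events; the topic, bootstrap servers, and field handling are hypothetical placeholders and assume the default Debezium JSON envelope.

```python
import json

from kafka import KafkaConsumer

# Topic and bootstrap servers are hypothetical placeholders; Debezium typically
# publishes one topic per source table (here: server.inventory.orders).
consumer = KafkaConsumer(
    "server.inventory.orders",
    bootstrap_servers=["kafka:9092"],
    group_id="orders-cdc-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for message in consumer:
    if message.value is None:  # tombstone records mark deletions
        continue
    payload = message.value.get("payload", {})
    op = payload.get("op")        # c = insert, u = update, d = delete, r = snapshot read
    after = payload.get("after")  # row state after the change (None for deletes)
    # In a real pipeline these events would be batched and upserted into the warehouse.
    print(op, after)
```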

Posted 1 month ago

Apply

5.0 - 10.0 years

0 - 2 Lacs

Bengaluru

Work from Office

Role: PySpark Developer Experience: 5+ Years Work Location: Bangalore (5 Days - WFO) Mode of interview: F2F (Bangalore) Date of interview: 21st Jun & 22nd Jun (Saturday & Sunday) Timings: 10:30 AM to 3 PM Skills Required: PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.
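
As an illustration of the PySpark-on-CDP work described, here is a minimal sketch that reads a Hive table, aggregates it, and writes a partitioned result back to the metastore; the database and table names are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets Spark read/write tables registered in the cluster metastore.
spark = (
    SparkSession.builder
    .appName("daily-orders-aggregation")
    .enableHiveSupport()
    .getOrCreate()
)

# Database and table names are hypothetical placeholders.
orders = spark.table("staging_db.orders")

daily_summary = (
    orders
    .filter(F.col("order_status") == "COMPLETE")
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_amount").alias("total_amount"),
    )
)

# Write back as a partitioned table that Hive/Impala can query directly.
(
    daily_summary.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("curated_db.daily_order_summary")
)
```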

Posted 1 month ago

Apply

10.0 - 18.0 years

30 - 45 Lacs

Hyderabad, Bengaluru

Hybrid

Role & responsibilities We are seeking a Data Architect / Lead to support the implementation of a SaaS-based solution and its integration with the data platform, ensuring seamless data ingestion into the Data Lake. The ideal candidate should have a strong technical background in data engineering and system integration and the ability to coordinate with multiple stakeholders effectively. Added advantages if the candidate has: 1. Experience with Orbit, which will be integrated with the customer's data platform via a data pipeline; this role will involve coordinating with the Orbit, Apigee, Data Platform, and Dell Boomi teams. 2. Experience adhering to GxP processes and validation, ensuring compliance with regulatory requirements, safeguarding patient safety, maintaining product quality, and mitigating risks in the life sciences industry. Primary Skills (Must-have experience): Strong background in Data Engineering and Application Integration with experience in data pipelines and API management. Familiarity with Apigee and Dell Boomi for data ingestion and integrations. Hands-on experience with data processing and cloud-based integrations. At least 10+ years of solid experience as a Data Architect / Technical Lead integrating multiple systems such as SaaS applications and data platforms using ETL and API-based solutions. Act as the SPOC/Lead for technical services to assigned business area(s). Strong experience with AWS data services, particularly Starburst and similar technologies, for efficient data querying and integration. Recommend technical solutions to meet short- and long-range business needs. Lead development and manage execution. Align with business process needs and ensure rigorous validation of the GxP process to maintain compliance with regulatory requirements. Collaborate with the project teams and stakeholders, ensuring successful data integration into the customer's data platform. Coordinate with offshore and onshore teams, including the Orbit, Apigee, Dell Boomi, and vendor data platform teams. Provide technical guidance on data ingestion, application integration processes, and troubleshooting. Work with cross-functional teams to resolve technical challenges and streamline processes. Maintain effective communication with stakeholders to align on project progress and requirements. Overlap with the EST time zone (from 12 PM EST onwards) to facilitate collaboration. Secondary Skills (Good to have): Ability to coordinate between multiple teams (onshore, offshore, and vendor teams). Strong problem-solving and troubleshooting skills. Excellent communication and stakeholder management abilities. Experience working in life sciences, healthcare, GxP process & compliance, and patient safety domains is a plus.

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Navi Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have AWS - S3, Athena, DynamoDB, Lambda, Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (just like a rules engine). Developed Python code to gather data from HBase and designed the solution to implement using PySpark. Apache Spark DataFrames/RDDs were used to apply business transformations, and Hive Context objects were utilized to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have AWS - S3, Athena, DynamoDB, Lambda, Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (just like a rules engine). Developed Python code to gather data from HBase and designed the solution to implement using PySpark. Apache Spark DataFrames/RDDs were used to apply business transformations, and Hive Context objects were utilized to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune

Hybrid

Job Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get an opportunity to - Design and implement data engineering solutions that is scalable, reliable and secure on the Cloud environment Understand and translate business needs into data engineering solutions Build large scale data pipelines that can handle big data sets using distributed data processing techniques that supports the efforts of the data science and data application teams Partner with cross-functional stakeholder including Product managers, Architects, Data Quality engineers, Application and Quantitative Science end users to deliver engineering solutions Contribute to defining data governance across the data platform Basic Requirements: A minimum of a BS degree in computer science, software engineering, or related scientific discipline is desired 3+ years of work experience in building scalable and robust data engineering solutions Strong understanding of Object Oriented programming and proficiency with programming in Python (TDD) and Pyspark to build scalable algorithms 3+ years of experience in distributed computing and big data processing using the Apache Spark framework including Spark optimization techniques 2+ years of experience with Databricks, Delta tables, unity catalog, Delta Sharing, Delta live tables(DLT) and incremental data processing Experience with Delta lake, Unity Catalog Advanced SQL coding and query optimization experience including the ability to write analytical and nested queries 3+ years of experience in building scalable ETL/ ELT Data Pipelines on Databricks and AWS (EMR) 2+ Experience of orchestrating data pipelines using Apache Airflow/ MWAA Understanding and experience of AWS Services that include ADX, EC2, S3 3+ years of experience with data modeling techniques for structured/ unstructured datasets Experience with relational/columnar databases - Redshift, RDS and interactive querying services - Athena/ Redshift Spectrum Passion towards healthcare and improving patient outcomes Demonstrate analytical thinking with strong problem solving skills Stay on top of emerging technologies and posses willingness to learn. Bonus Experience (optional) Experience with Agile environment Experience operating in a CI/CD environment Experience building HTTP/REST APIs using popular frameworks Healthcare experience
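
One of the incremental-processing patterns this role describes (Delta tables with upserts on Databricks) can be sketched with the DeltaTable merge API as below; the catalog, schema, table, and key names are hypothetical placeholders rather than anything from the posting.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Table names are hypothetical placeholders (Unity Catalog style three-part names).
target = DeltaTable.forName(spark, "main.curated.patient_events")

# Incremental batch of new/changed rows, e.g. produced by an upstream bronze-to-silver job.
updates = spark.table("main.staging.patient_events_batch")

# Upsert the batch into the target Delta table on the business key.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.event_id = s.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Merging instead of overwriting keeps the curated table incrementally up to date, which is the usual motivation for Delta-based pipelines over full reloads.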

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are hiring a Data Engineering Manager to lead a team building data pipelines, models, and analytics infrastructure. Ideal for experienced engineers who can manage both technical delivery and team growth. Key Responsibilities: Lead development of ETL/ELT pipelines and data platforms Manage data engineers and collaborate with analytics/data science teams Architect systems for data ingestion, quality, and warehousing Define best practices for data architecture, testing, and monitoring Required Skills & Qualifications: Strong experience with big data tools (Spark, Kafka, Airflow) Proficiency in SQL, Python, and cloud data services (e.g., Redshift, BigQuery) Proven leadership and team management in data engineering contexts Bonus: Experience with real-time streaming and ML pipeline integration Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 1 month ago

Apply

6.0 - 9.0 years

10 - 24 Lacs

Gurugram

Work from Office

Responsibilities: * Design, develop & maintain data pipelines using Snowflake, AWS/GCP * Collaborate with cross-functional teams on ETL processes & data modeling
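
As a small illustration of the Snowflake side of this work, here is a minimal snowflake-connector-python sketch that loads staged files into a raw table; the connection settings, stage, and table names are hypothetical placeholders, and real credentials would come from a secret manager rather than environment variables.

```python
import os

import snowflake.connector

# Connection parameters are hypothetical placeholders.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load files already staged in an external stage (e.g. backed by S3 or GCS) into a raw table.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @INGEST_STAGE/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```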

Posted 1 month ago

Apply

2.0 - 6.0 years

0 - 1 Lacs

Pune

Work from Office

As Lead Data Engineer , you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets. Responsibilities: Develop and optimize large-scale ETL pipelines Design schema-aware data flows and dashboard-ready datasets Manage data pipelines on AWS (S3, Glue, Redshift) Work with transactional and retail data for real-time insights

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies