
319 Data Ingestion Jobs - Page 7

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

14 - 17 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs.

Primary responsibilities:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
- 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience using Python to develop a custom framework for generating rules (similar to a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations (see the sketch below).

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
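To make the Spark-on-Hive work described above concrete, here is a minimal PySpark sketch, assuming a Hive-enabled cluster; the database, table, and column names are illustrative, not from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive-enabled Spark session; assumes a configured Hive metastore
spark = (SparkSession.builder
         .appName("ingestion-example")
         .enableHiveSupport()
         .getOrCreate())

# Read a source Hive table (illustrative name) and apply a business transformation
events = spark.table("source_db.events")
daily = (events
         .filter(F.col("event_type") == "purchase")
         .groupBy(F.to_date("event_ts").alias("event_date"))
         .agg(F.count("*").alias("purchases")))

# Write the result back to Hive as the target of the pipeline
daily.write.mode("overwrite").saveAsTable("target_db.daily_purchases")
```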

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Instrument Angular frontend and Java backend applications with GIL for effective logging and analytics.
- Design and implement client-side and server-side tracking mechanisms to capture key user and system activities.
- Aggregate, store, and process instrumentation data efficiently for reporting and analytics.
- Develop dashboards that summarize usage metrics, engagement patterns, and system health using modern visualization frameworks/tools.
- Create tracking for usage of different data sources (e.g., APIs, databases) and present metrics to business and technical stakeholders.
- Collaborate closely with product managers, UX designers, backend engineers, and data engineers to identify meaningful metrics and optimize tracking strategies.

Required Skills and Experience:
- Frontend: strong experience with Angular and TypeScript.
- Backend: solid experience with Java (Spring Boot preferred) and REST APIs.
- Familiarity with SQL and basic dashboarding to generate reports.
- Experience with instrumentation approaches.
- Understanding of data ingestion pipelines, event tracking, and aggregation techniques.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Excellent verbal and written communication skills.

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

- 2+ years of implementation experience with Adobe Experience Cloud products, especially Adobe Experience Platform and Journey Optimizer.
- Expertise in deploying, configuring, and optimizing all major Experience Platform services and Journey Optimizer features.
- Strong SQL skills for querying datasets, implementing data transformations, cleansing data, etc.
- Hands-on experience developing custom applications, workflows, and integrations using Experience Platform APIs (see the sketch below).
- Deep familiarity with Adobe Experience Platform and Journey Optimizer technical implementations, including: setting up source connectors and ingesting data using the API and UI; configuring Experience Events (XDM schemas) for capturing data from sources; creating audience segments using custom and AI/ML-based segments; triggering journeys and activations based on segments and profiles; implementing journey automations, actions, and messages; and integrating with destinations like CRM, email, etc.
- Hands-on experience developing against the Platform APIs and SDKs in a language such as JavaScript (for API calls) or Java (for extending functionality with custom connectors or applications).
- Expertise in data modelling for multi-channel needs using the Experience Data Model (XDM).
- Familiarity with configuring IMS authentication to connect external systems to Platform APIs.
- Experience developing, debugging, and optimizing custom server-side applications.
- Proficiency in JavaScript, JSON, REST APIs, and SQL.
- Expertise ingesting data from sources like Adobe Analytics, CRM, ad servers, web, and mobile.
- Strong understanding of concepts like multi-channel data management, segmentation, orchestration, automation, and personalization.
- Understanding of data structures like JSON for representing data in APIs, data tables for processing tabular data, and streaming data flows.
- Experience automating technical tasks using tools like API integration tests, Postman for API testing, and Git for source control.
- Excellent documentation and communication skills, with the ability to clearly present technical recommendations to customers.
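As a rough illustration of the Platform API work listed above, here is a Python sketch that lists Catalog datasets. The posting mentions JavaScript/Java for API calls, but the same REST pattern applies in any language; the credentials are placeholders, and the endpoint path should be checked against Adobe's current API reference:

```python
import requests

# All values below are placeholders; real calls require IMS OAuth credentials
ACCESS_TOKEN = "<ims-access-token>"
API_KEY = "<client-id>"
ORG_ID = "<ims-org-id>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": ORG_ID,
    "Accept": "application/json",
}

# Catalog Service route shown for illustration; verify against Adobe's API docs
resp = requests.get(
    "https://platform.adobe.io/data/foundation/catalog/dataSets",
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
for dataset_id, meta in resp.json().items():
    print(dataset_id, meta.get("name"))
```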

Posted 1 month ago

Apply

0.0 - 1.0 years

2 - 3 Lacs

Noida

Work from Office

As an intern, you will play a key role in supporting our data operations by handling Level 1 (L1) alert monitoring for both ingestion and analytics pipelines. You'll be responsible for performing L1 troubleshooting on assigned ingestion and analytics tasks as part of our Business-As-Usual (BAU) activities. This role also involves cross-collaboration with multiple teams to ensure timely resolution of issues and to keep data workflows running smoothly. It's a great opportunity to gain hands-on experience in real-time monitoring, issue triaging, and inter-team coordination in a production environment.

A Day in the Life:
- Create world-class customer-facing documentation that delights and excites customers.
- Remove ambiguity by documenting things, making teams more efficient and effective.
- Convert tacit knowledge into explicit knowledge.
- Handle L1 alert monitoring of ingestions and analytics.
- Perform L1 troubleshooting for issues in assigned ingestion/analytics tasks (BAUs).
- Cross-collaborate with other teams to get issues resolved.
- Adhere to JIRA processes to avoid SLA breaches.
- Analyze and resolve low-priority customer tickets.

What You Need:
- Basic knowledge of SQL and the ability to write SQL queries.
- An ambitious person who can work in a flexible startup environment with only one thing in mind: getting things done.
- Excellent written and verbal communication skills.
- Comfortable working weekend, night, and rotational shifts.

Preferred Skills: SQL/ETL/Python support; support processes (SLAs, OLAs, product or application support); data ingestion; analytics; Power BI.

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office

Design, implement, and optimize Big Data solutions using Hadoop technologies. You will work on data ingestion, processing, and storage, ensuring efficient data pipelines. Strong expertise in Hadoop, HDFS, and MapReduce is essential for this role.
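To ground the MapReduce requirement, here is a hedged sketch of a classic word-count job written for Hadoop Streaming; the file names and input/output paths are illustrative:

```python
#!/usr/bin/env python3
# mapper.py -- reads raw lines from stdin, emits tab-separated (word, 1) pairs
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- sums counts per word; Hadoop sorts mapper output by key first
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

Such a job would be submitted along the lines of: `hadoop jar hadoop-streaming.jar -input /data/raw -output /data/counts -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py`.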

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
- Experience in the IT industry.
- Working experience building productionized data ingestion and processing pipelines in Snowflake.
- Strong understanding of Snowflake architecture; well-versed in data warehousing concepts.
- Expertise in and excellent understanding of Snowflake features and the integration of Snowflake with other data processing systems.
- Able to create data pipelines for ETL/ELT.
- Excellent presentation and communication skills, both written and verbal.
- Ability to problem-solve and architect in an environment with unclear requirements.
- Able to create high-level and low-level design documents based on requirements.
- Hands-on experience configuring, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
- Awareness of data visualisation tools and methodologies.
- Works independently on business problems and generates meaningful insights.
- Good to have (not mandatory): some experience/knowledge of Snowpark, Streamlit, or GenAI.
- Experience implementing Snowflake best practices.
- Snowflake SnowPro Core Certification is a must.

Roles and Responsibilities:
- Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc.
- Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, the Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight (see the sketch below).
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
- Some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python and PySpark.
- Some experience with Snowflake RBAC and data security.
- Good experience implementing CDC or SCD Type 2.
- In-depth understanding of data warehouse and ETL concepts and data modelling.
- Experience in requirement gathering, analysis, design, development, and deployment.
- Experience building data ingestion pipelines; optimize and tune data pipelines for performance and scalability.
- Able to communicate with clients and lead a team.
- Proficiency working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Good to have: experience with deployment using CI/CD tools and with repositories like Azure Repos, GitHub, etc.

Qualifications we seek in you! Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience relevant to a Snowflake Data Engineer role. Skill matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and data warehousing concepts.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit our website. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
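A minimal sketch of the bulk-load work this role describes, using the Snowflake Python connector; the account, credentials, stage, and table names are placeholders:

```python
import snowflake.connector

# Connection parameters are placeholders; in practice prefer key-pair auth or SSO
conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Bulk-load staged files from an external S3 stage (stage/table names illustrative)
    cur.execute("""
        COPY INTO raw.orders
        FROM @raw.s3_orders_stage
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```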

Posted 1 month ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Hyderabad

Work from Office

- 5+ years of experience developing Snowflake data models, data ingestion, views, stored procedures, and complex queries (see the sketch below).
- Good experience in SQL.
- Experience with Informatica PowerCenter / IICS ETL tools.
- Test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions.
- Provide production support for Data Warehouse issues such as data load problems and transformation/translation problems.
- Ability to facilitate and coordinate discussion and to manage the expectations of multiple stakeholders.
- Good communication and facilitation skills.
- Work in an onsite-offshore model involving daily interactions with onshore teams to ensure on-time, quality deliverables.
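As a hedged sketch of the kind of complex incremental-load SQL this role maintains, executed from Python; the staging and target table names are hypothetical:

```python
import snowflake.connector

# Connection details are placeholders
conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="ETL_WH", database="DW", schema="PUBLIC",
)

# Upsert staged changes into the warehouse table (illustrative names)
MERGE_SQL = """
MERGE INTO dw.customers AS tgt
USING staging.customer_updates AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```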

Posted 1 month ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Pune

Work from Office

Provides technical expertise, including addressing and resolving complex technical issues.

Responsibilities:
- Demonstrable experience assessing application workloads and the technology landscape for cloud suitability, and developing the business case and cloud adoption roadmap.
- Expertise in data ingestion, data loading, Data Lake, bulk processing, and transformation using Azure services, and in migrating on-premises services to various Azure environments (see the sketch below).
- Good experience with a range of services from the Microsoft Azure cloud platform, including infrastructure and security related services such as Azure AD, IaaS, containers, storage, networking, and Azure security.
- Good experience in enterprise solution shaping and Microsoft Azure cloud architecture development, including excellent documentation skills.
- Good understanding of the Azure and AWS cloud service offerings covering Compute, Storage, Network, WebApp, Functions, Gateway, Clustering, Key Vault, and AD.
- Design and develop high-performance, scalable, and secure cloud-native applications on Microsoft Azure, following Azure best practices/recommendations.
- Design, implement, and improve automations for cloud environments using native or third-party tools like Terraform, Salt, Chef, Puppet, Databricks, etc.
- Create business cases for transformation and modernization, including analysis of both total cost of ownership and the potential cost and revenue impacts of the transformation.
- Advise and engage with customer executives on their Azure and AWS cloud strategy roadmap, improvements, and alignment by bringing in industry best practices/trends, and work on further improvements with the required business case analysis and presentations.
- Provide Microsoft Azure architecture collaboration with other technical teams.
- Document solutions (e.g., architecture, configuration, and setup).
- Work within a project management/agile delivery methodology in a leading role as part of a wider team.
- Provide effective knowledge transfer and upskilling to relevant customer personnel to ensure an appropriate level of future self-sufficiency.
- Assist in the transition of projects to Enterprise Services teams.

Skills Required:
- Strong knowledge of cloud security standards and principles, including identity and access management in Azure.
- Essential to have strong, in-depth, and demonstrable hands-on experience with the following technologies:
- Microsoft Azure and its relevant build, deployment, automation, networking, and security technologies in cloud and hybrid environments.
- Azure Stack Hub, Azure Stack HCI/Hyper-V clusters.
- Microsoft Azure IaaS and Platform as a Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions, and other serverless services.
- Understanding of Microsoft identity and access management products, including Azure AD and Azure AD B2C.
- Microsoft Azure operational and monitoring tools, including Azure Monitor, App Insights, and Log Analytics.
- Microsoft Windows Server, System Center, Hyper-V, and Storage Spaces.
- Knowledge of PowerShell, Git, ARM templates, and deployment automation.
- Hands-on experience with Azure and AWS cloud-native automation frameworks, along with experience with Python and Azure services like Databricks, Data Factory, Azure Functions, StreamSets, etc.
- Hands-on experience with IaC (Infrastructure as Code), containers, Kubernetes (AKS), Ansible, Terraform, Docker, Linux sysadmin (RHEL/Ubuntu/Alpine), Jenkins, and building CI/CD pipelines in Azure DevOps.
- Ability to define and design the technical architecture with the best-suited Azure components, ensuring a seamless end-to-end workflow from data source to Power BI/portal/dashboards/UI.

Skills Good to Have:
- Experience building big data solutions using Azure and AWS services like Analysis Services and DevOps; databases like SQL Server, Cosmos DB, DynamoDB, and MongoDB; and web service integration.
- Possession of either the Developing Microsoft Azure Solutions or Architecting Microsoft Azure certifications.
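A minimal sketch of the data-landing step in such an ingestion workflow, using the Azure Python SDK; the storage account URL, container, and file paths are illustrative:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# DefaultAzureCredential picks up a managed identity in Azure, or `az login` locally
credential = DefaultAzureCredential()
service = BlobServiceClient(
    account_url="https://<storage_account>.blob.core.windows.net",
    credential=credential,
)

# Land a raw file in the data lake's "landing" container (names illustrative)
container = service.get_container_client("landing")
with open("orders_2024-06-01.csv", "rb") as fh:
    container.upload_blob(
        name="orders/2024/06/01/orders.csv", data=fh, overwrite=True
    )
```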

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 8 Lacs

Chennai

Work from Office

We are looking immediately for Snowflake Data Warehouse Engineers (Contract, Chennai). Experience: 5+ years. Location: Chennai. Notice Period: Immediate. Type: Contract.

Description: We need experienced, collaborative Snowflake Data Warehouse Engineers with 5+ years of experience in developing Snowflake data models, data ingestion, views, stored procedures, and complex queries. Good experience in SQL. Experience with Informatica PowerCenter / IICS ETL tools. Test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions. Provide production support for Data Warehouse issues such as data load problems and transformation/translation problems. Ability to facilitate and coordinate discussion and to manage the expectations of multiple stakeholders. Candidates must have good communication and facilitation skills, and work in an onsite-offshore model involving daily interactions with onshore teams to ensure on-time, quality deliverables.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

As a Data Engineer, you will ensure the smooth functioning of our applications and data systems. Your expertise in data ingestion, release management, monitoring, incident review, Databricks, Azure cloud, and data analysis will be instrumental in maintaining the reliability, availability, and performance of our applications and data pipelines. You will collaborate closely with cross-functional teams to support application deployments, monitor system health, analyze data, and provide timely resolutions to incidents. The ideal candidate has a strong background in Azure DevOps, Azure cloud (especially ADF), Databricks, and AWS cloud.

Key Responsibilities:
- Implement and manage data ingestion processes to acquire data from various sources and ensure its accuracy and completeness in our systems.
- Collaborate with development and operations teams to facilitate the release management process, ensuring successful and efficient deployment of application updates and enhancements.
- Monitor the performance and health of applications and data pipelines, promptly identifying and addressing any anomalies or potential issues (see the monitoring sketch below).
- Respond to incidents and service requests in a timely manner, conducting thorough incident reviews to identify root causes and implementing effective solutions to prevent recurrence.
- Utilize Databricks and monitoring tools to analyze application logs, system metrics, and data to diagnose and troubleshoot issues effectively.
- Analyze data-related issues, troubleshoot data quality problems, and propose solutions to optimize data workflows.
- Utilize Azure cloud services to deploy and manage applications and data infrastructure efficiently.
- Document incident reports, resolutions, and support procedures for knowledge sharing and future reference.
- Continuously improve support processes and workflows to enhance efficiency, minimize downtime, and improve the overall reliability of applications and data systems.
- Stay up to date with the latest technologies and industry best practices related to application support, data analysis, and cloud services.

Technical Knowledge (level of expertise / priority):
- Azure Cloud: Senior level, priority must-have.
- Databricks: Senior level, priority must-have.
- ADF: priority must-have.
- Scala, Spark, AWS Cloud, Python: must-have.
- Rstudio/Rconnect: Junior level, nice to have.
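A hedged sketch of the kind of pipeline-health check this role performs, polling for failed Databricks job runs; the workspace URL and token are placeholders, and the `runs/list` endpoint is part of the Databricks Jobs API 2.1:

```python
import requests

# Workspace host and personal access token are placeholders
HOST = "https://<workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

resp = requests.get(
    f"{HOST}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"completed_only": "true", "limit": 25},
    timeout=30,
)
resp.raise_for_status()

# Surface failed runs for incident review
for run in resp.json().get("runs", []):
    state = run.get("state", {})
    if state.get("result_state") == "FAILED":
        print(run["run_id"], run.get("run_name"), state.get("state_message"))
```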

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Hybrid

We are seeking a skilled Database Specialist with strong expertise in Time-Series Databases, specifically Loki for logs, InfluxDB, and Splunk for metrics. The ideal candidate will have a solid background in query languages, Grafana, Alert Manager, and Prometheus. This role involves managing and optimizing time-series databases, ensuring efficient data storage, retrieval, and visualization.

Key Responsibilities:
- Design, implement, and maintain time-series databases using Loki, InfluxDB, and Splunk to store and manage high-velocity time-series data.
- Develop efficient data ingestion pipelines for time-series data from various sources (e.g., IoT devices, application logs, metrics); see the sketch below.
- Optimize database performance for high write and read throughput, ensuring low latency and high availability.
- Implement and manage retention policies, downsampling, and data compression strategies to optimize storage and query performance.
- Collaborate with DevOps and infrastructure teams to deploy and scale time-series databases in cloud or on-premise environments.
- Build and maintain dashboards and visualization tools (e.g., Grafana) for monitoring and analyzing time-series data.
- Troubleshoot and resolve issues related to data ingestion, storage, and query performance.
- Work with development teams to integrate time-series databases into applications and services.
- Ensure data security, backup, and disaster recovery mechanisms are in place for time-series databases.
- Stay updated with the latest advancements in time-series database technologies and recommend improvements to existing systems.

Key Skills: Strong expertise in Time-Series Databases with Loki (for logs), InfluxDB, and Splunk (for metrics).
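A minimal sketch of time-series ingestion and retrieval with the InfluxDB 2.x Python client; the URL, token, org, bucket, and measurement names are illustrative:

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Connection values are placeholders for an InfluxDB 2.x deployment
client = InfluxDBClient(url="http://localhost:8086", token="<token>", org="ops")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Ingest one metric point with a tag and a field
point = (Point("cpu_usage")
         .tag("host", "web-01")
         .field("percent", 63.2))
write_api.write(bucket="metrics", record=point)

# Query the last hour back out with Flux
flux = ('from(bucket:"metrics") |> range(start: -1h) '
        '|> filter(fn: (r) => r._measurement == "cpu_usage")')
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())

client.close()
```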

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Bengaluru

Remote

Job Title: Software Engineer, GCP Data Engineering. Work Mode: Remote. Base Location: Bengaluru. Experience Required: 4 to 6 years.

Job Summary: We are seeking a Software Engineer with a strong background in GCP Data Engineering and a solid understanding of how to build scalable data processing frameworks. The ideal candidate will be proficient in data ingestion, transformation, and orchestration using modern cloud-native tools and technologies. This role requires hands-on experience in designing and optimizing ETL pipelines, managing big data workloads, and supporting data quality initiatives.

Key Responsibilities:
- Design and develop scalable data processing solutions using Apache Beam, Spark, and other modern frameworks (see the sketch below).
- Build and manage data pipelines on Google Cloud Platform (GCP) using services like Dataflow, Dataproc, Composer (Airflow), and BigQuery.
- Collaborate with data architects and analysts to understand data models and implement efficient ETL solutions.
- Leverage DevOps and CI/CD best practices for code management, testing, and deployment using tools like GitHub and Cloud Build.
- Ensure data quality, performance tuning, and reliability of data processing systems.
- Work with cross-functional teams to understand business requirements and deliver robust data infrastructure to support analytical use cases.

Required Skills:
- 4 to 6 years of professional experience as a Data Engineer working on cloud platforms, preferably GCP.
- Proficiency in Java and Python with strong problem-solving and analytical skills.
- Hands-on experience with Apache Beam, Apache Spark, Dataflow, Dataproc, Composer (Airflow), and BigQuery.
- Strong understanding of data warehousing concepts and ETL pipeline optimization techniques.
- Experience in cloud-based architectures and DevOps practices.
- Familiarity with version control (GitHub) and CI/CD pipelines.

Preferred Skills:
- Exposure to modern ETL tools and data integration platforms.
- Experience with data governance, data quality frameworks, and metadata management.
- Familiarity with performance tuning in distributed data processing systems.

Tech Stack: Cloud: GCP (Dataflow, BigQuery, Dataproc, Composer). Programming: Java, Python. Frameworks: Apache Beam, Apache Spark. DevOps: GitHub, CI/CD tools, Composer (Airflow). ETL/Data Tools: data ingestion, transformation, and warehousing on GCP.
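A minimal Apache Beam sketch of the kind of ingestion pipeline described above; the bucket, project, dataset, and table names are placeholders:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Runs locally on the DirectRunner by default; pass --runner=DataflowRunner
# (plus project/region/temp_location options) to execute on Dataflow.
with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://<bucket>/events/*.json")
     | "Parse" >> beam.Map(json.loads)
     | "KeepPurchases" >> beam.Filter(lambda e: e.get("type") == "purchase")
     | "Write" >> beam.io.WriteToBigQuery(
           "<project>:analytics.purchases",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER))
```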

Posted 1 month ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, ensuring they meet 100% quality assurance parameters.

Big Data Developer - Spark, Scala, PySpark (coding and scripting). Years of experience: 5 to 12 years. Location: Bangalore. Notice period: 0 to 30 days.

Key Skills:
- Proficient in Spark, Scala, and PySpark coding and scripting.
- Fluent in big data engineering development using the Hadoop/Spark ecosystem.
- Hands-on experience in Big Data; good knowledge of the Hadoop ecosystem.
- Knowledge of cloud architecture (AWS).
- Data ingestion and integration into the Data Lake using Hadoop-ecosystem tools such as Sqoop, Spark, Impala, Hive, Oozie, Airflow, etc. (see the scheduling sketch below).
- Fluency in Python and/or Scala.
- Strong communication skills.

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system.
- Ensure that code is error-free, with no bugs or test failures.
- Prepare reports on programming project specifications, activities, and status.
- Ensure all code issues are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns.
- Compile timely, comprehensive, and accurate documentation and reports as requested.
- Coordinate with the team on daily project status and progress, and document it.
- Provide feedback on usability and serviceability, trace results to quality risk, and report to the concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work.
- Take feedback regularly to ensure smooth and on-time delivery.
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Document necessary details and reports formally for a proper understanding of the software, from client proposal to implementation.
- Ensure good-quality interactions with the customer (e-mail content, fault report tracking, voice calls, business etiquette, etc.).
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally.

Deliverables (performance parameter: measure):
1. Continuous integration, deployment, and monitoring of software: 100% error-free onboarding and implementation; throughput %; adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery; manage software; troubleshoot queries; customer experience; completion of assigned certifications for skill upgradation.
3. MIS & reporting: 100% on-time MIS and report generation.

Mandatory Skills: Python for Insights. Experience: 5-8 years.
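To make the Airflow-based ingestion scheduling concrete, here is a minimal DAG sketch; the spark-submit command, job script, and source/target paths are illustrative placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A minimal daily ingestion DAG that wraps a Spark job submission
with DAG(
    dag_id="daily_lake_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command=(
            "spark-submit --master yarn /opt/jobs/ingest_to_lake.py "
            "--source jdbc:mysql://db/sales --target /data/lake/sales"
        ),
    )
```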

Posted 1 month ago

Apply

4.0 - 9.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Roles and Responsibilities Develop product strategy, roadmap, and backlog to drive business growth. Collaborate with cross-functional teams to deliver high-quality products that meet customer needs. Analyze market trends, competitors, and customer feedback to inform product decisions. Ensure effective communication with stakeholders through regular updates on product progress. Desired Candidate Profile 4-9 years of experience in Product Management or related field (Analytics). Strong understanding of Agile methodology, Scrum framework, and SDLC life cycle. Proficiency in tools such as JIRA, BRD, Use Cases, User Stories, Data Ingestion, SQL.

Posted 1 month ago

Apply

3.0 - 5.0 years

8 - 17 Lacs

Gurugram

Work from Office

Roles and Responsibilities:
1. Support effective clinical analysis through the development of enriched data, leveraging analytical experience to guide data selection, data visualization, and additional analysis as appropriate.
2. Work with internal and external partners to develop data and requirements in support of high-quality clinical analytics.
3. Gather information for project-related research; analyze that information; and produce reports and analyses as appropriate.
4. Work with the engineering team on the development of new data and quality control of reports.
5. Identify appropriate techniques for a given analysis; implement the analysis through technical programming; assess results and adjust for future iterations.
6. Develop oral and written summaries of findings for internal and external audiences.
7. Communicate effectively with internal and external stakeholders on project design, progress, outcomes, and any related constraints.
8. Prioritize, plan, and track project progress.
9. Perform other duties and responsibilities as required, assigned, or requested.

Job Requirements - Qualifications:
- Advanced degree (Master's or PhD) in healthcare, computer science, finance, statistics, or analytics.
- Experience with data visualization tools, including Tableau.
- Experience with clinical trial design or evaluation.
- Certifications to support technical skills.

Preferred:
- 4-6 years of experience in an analytics-intensive role.
- Strong proficiency with SQL and its variations among popular databases.
- Experience with reporting tools like SSRS and/or Power BI.
- Skilled at optimizing large, complicated T-SQL statements and index design.
- Familiarity with data visualization tools like AWS QuickSight or Tableau.
- Familiarity with data analytics platforms, data ingestion, ETL, predictive models, AI, and ML.
- Familiarity with healthcare file standards like HL7, FHIR, and CCDA preferred.
- Proficient understanding of code versioning tools such as GitHub, and frameworks such as AWS, Azure, and FTP/SFTP/VPN protocols.
- Familiarity with the Software Development Life Cycle (SDLC) and Agile and Waterfall processes.
- Ability to work in a fast-paced, results-driven, and complex healthcare setting.
- Excellent analytical, problem-solving, organization, and time-management skills.
- Takes accountability and ownership.
- Capable of embracing unexpected changes in direction or priority.
- Excellent communication skills.
- Deep proficiency in standard applications such as Microsoft Word, Excel, and Project.
- Knowledge of EHR systems, specifically Athena Healthnet, Acumen, and EPIC, is beneficial.

Email: hr@gmanalyticssolutions.in Contact: 9205015655

Posted 1 month ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Bengaluru, Karnataka

Work from Office

We are seeking a highly skilled GCP Data Engineer with experience in designing and developing data ingestion frameworks, real-time processing solutions, and data transformation frameworks using open-source tools. The role involves operationalizing open-source data-analytic tools for enterprise use, ensuring adherence to data governance policies, and performing root-cause analysis on data-related issues. The ideal candidate should have a strong understanding of cloud platforms, especially GCP, with hands-on expertise in tools such as Kafka, Apache Spark, Python, Hadoop, and Hive. Experience with data governance and DevOps practices, along with GCP certifications, is preferred.
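As a minimal illustration of the real-time ingestion side of this role, here is a Kafka consumer sketch in Python; the broker address, consumer group, and topic name are placeholders:

```python
from confluent_kafka import Consumer

# Broker address, group id, and topic are illustrative placeholders
consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "ingestion-framework",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["raw-events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        # Hand the payload to the downstream transformation step
        print(msg.topic(), msg.partition(), msg.value().decode("utf-8"))
finally:
    consumer.close()
```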

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Chennai

Work from Office

About the job : Role : Microsoft Fabric Data Engineer Experience : 6+ years as Azure Data Engineer including at least 1 E2E Implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2 etc) in Fabric, Pyspark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW etc will be a plus. - Nice to have ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
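As a loose illustration of the Fabric pipeline work described above — a minimal PySpark sketch assuming a Microsoft Fabric notebook attached to a lakehouse, where a `spark` session is provided; the landing path and table name are illustrative:

```python
from pyspark.sql import functions as F

# Runs in a Fabric lakehouse notebook, where `spark` is pre-provisioned.
# "Files/..." resolves against the default lakehouse; names are illustrative.
raw = (spark.read
       .option("header", "true")
       .csv("Files/landing/customers/*.csv"))

clean = (raw
         .dropDuplicates(["customer_id"])
         .withColumn("ingested_at", F.current_timestamp()))

# Append into a lakehouse Delta table (a "bronze" layer in a medallion design)
clean.write.mode("append").format("delta").saveAsTable("customers_bronze")
```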

Posted 1 month ago

Apply

2.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group, Engineering Group > Mechanical Engineering General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Mechanical Engineer, you will design, analyze, troubleshoot, and test electro-mechanical systems and packaging. Qualcomm Engineers collaborate across functions to provide design information and complete project deliverables. Minimum Qualifications: Bachelor's degree in Mechanical Engineering or related field and 2+ years of Mechanical Engineering or related work experience. OR Master's degree in Mechanical Engineering or related field and 1+ year of Mechanical Engineering or related work experience. OR PhD in Mechanical Engineering or related field. Job Overview The successful candidate will operate as a member of Corporate Engineering department. Responsibilities include working with US and India teams to conceptualize and design high performance electronic assemblies for Mobile, Modem, Compute, Auto, IOT etc. Specific tasks include daily use of Creo solid modeling software, concept layouts, defining rigid and flexible circuit board outline and mounting constraints, electronic file transfers to/from circuit board CAD designers, shielding/gasket design, design of mechanical parts and system assemblies, creating detailed mechanical drawings, specifying dimensional tolerances and tolerance stackups, creating BOMs for assemblies, working with internal teams for optimal selection of batteries, displays, and audio components, working with internal machine shop for prototypes, and working with external suppliers for design collaboration and fabrication, including injection molded tooling. Candidate will perform thermal analysis and simulations needed for the system design and enable heatsink designs and cooling requirements, work with Flotherm, Ansys ICEPAK, and other simulation tools as needed. The candidate will interface with internal staff and outside partners in the fast-paced execution of a variety of multi-disciplined projects. Leadership skills are expected on all projects to help direct, capture and interface with outside consultants. Minimum Qualifications 2+ years actively involved in the mechanical engineering, Thermal Engineering and design of high-density electronics packaging. 2+ years’ experience in a mechanical / Thermal engineering role designing electronics products / systems. Solid modeling experience utilizing Creo, Simulation experience in Flotherm or ANSYS icepak. Preferred Qualifications Expected to possess a strong understanding of mechanical engineering, thermal engineering and design fundamentals. Experience with defining outline, mounting, and I/O constraints of electronic circuit boards. Demonstrated ability for tolerance analysis and specification, including GD&T fundamentals. Demonstrated success in working with HW, component engineering, and other cross functional teams. Demonstrated success in working with HW and thermal analysis teams for appropriate thermal mitigation techniques. Solid understanding of design fundamentals for CNC, injection molded and stamped metal parts. Demonstrated ability working with supplier base for successful design collaboration and volume production. Strong experience with Creo solid model assembly, part, and drawing creation and Data storage. 
Experience with top-down skeleton model structuring and master model design techniques within Creo. Experience with IDF/EMN file exchange between mechanical and HW teams. Understands project goals and individual contribution toward those goals. Effectively communicates with project peers and engineering personnel via e-mail, web meetings, and instant messaging including status reports and illustrative presentation slides. Interact and collaborate with other internal mechanical engineers for optimal product development processes and schedule execution. Possess excellent verbal and written communication skills Leadership skills to help direct, prioritize, and clearly set project tasks per schedule. Communicate milestones and directions to outside team members.

Posted 1 month ago

Apply

2.0 - 5.0 years

0 - 3 Lacs

Mumbai, Pune

Work from Office

Salesforce Data Cloud Developer - Job Description

Key Responsibilities:
- Design, configure, and implement solutions within Salesforce Data Cloud to unify customer profiles across sources.
- Develop and maintain data ingestion pipelines, identity resolution rules, calculated insights, and activation targets.
- Collaborate with marketing, sales, analytics, and IT teams to define data requirements and use cases.
- Create and manage data streams and harmonization rules for ingesting data from Salesforce, cloud storage, and external systems.
- Implement and maintain segmentations and activation strategies using Data Cloud tools.
- Write and optimize SQL, JSON, and data transformation logic for calculated insights and unification.
- Monitor data quality, performance, and governance of the Data Cloud environment.
- Participate in solution architecture, integration planning, and technical documentation.
- Stay updated on Salesforce Data Cloud enhancements, best practices, and industry trends.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
- 2+ years of Salesforce development experience, including Salesforce CDP/Data Cloud.
- Strong understanding of data integration, customer identity resolution, and segmentation strategies.
- Hands-on experience with Salesforce Data Cloud features such as: Data Streams and Data Model Objects (DMOs); identity resolution rules; calculated insights; segments and activations.
- Proficiency in SQL, JSON, and data mapping techniques.
- Experience with Salesforce tools (e.g., Marketing Cloud, MuleSoft, CRM Analytics) is a plus.
- Salesforce certifications such as Salesforce Certified Data Cloud Consultant.

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Kolkata

Work from Office

Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Google BigQuery. Good-to-have skills: NA. Minimum 5 years of experience is required. Educational Qualification: Full-time 15-year qualification.

Role and Responsibilities:
1. Design, create, code, and support a variety of data pipelines and models on GCP cloud technology (see the sketch below).
2. Strong hands-on exposure to GCP services like BigQuery, Composer, etc.
3. Partner with business/data analysts, architects, and other key project stakeholders to deliver data requirements.
4. Develop data integration and ETL (Extract, Transform, Load) processes.
5. Support existing data warehouses and related pipelines.
6. Ensure data quality, security, and compliance.
7. Optimize data processing and storage efficiency; troubleshoot issues in the data space.
8. Seek to learn new skills/tools utilized in the data space (e.g., dbt, Monte Carlo, etc.).
9. Excellent communication skills, verbal and written; excellent analytical skills with an Agile mindset.
10. Demonstrate strong attention to detail and delivery accuracy.
11. Self-motivated team player with the ability to overcome challenges and achieve desired results.
12. Work effectively in a globally distributed environment.
13. Should be ready to work in shift B, i.e., 12:30 pm to 10:30 pm.
14. Should be ready to work as an individual contributor.

Skill Proficiency Expectation:
- Expert: Data Storage, BigQuery, SQL, Composer, Data Warehousing Concepts.
- Intermediate: Python.
- Basic/Preferred: DB, Kafka, Pub/Sub.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google BigQuery.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful solutions.
- This position is based at our Mumbai office.

Qualification: Full-time 15-year qualification.
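A minimal sketch of querying BigQuery from Python with the official client library; authentication relies on Application Default Credentials, and the project, dataset, and table names are illustrative:

```python
from google.cloud import bigquery

# Uses Application Default Credentials (e.g., `gcloud auth application-default login`)
client = bigquery.Client()

query = """
    SELECT DATE(event_ts) AS event_date, COUNT(*) AS events
    FROM `my_project.analytics.raw_events`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""
for row in client.query(query).result():
    print(row.event_date, row.events)
```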

Posted 1 month ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Kolkata

Work from Office

Experience : 6+ years as Azure Data Engineer including at least 1 E2E Implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2 etc) in Fabric, Pyspark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW etc will be a plus. - Nice to have ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 1 month ago

Apply

10.0 - 12.0 years

25 - 30 Lacs

Bengaluru

Remote

Responsibilities : - Lead and implement end-to-end technical solutions within D365 Customer Insights (Data and Journeys) to meet diverse client requirements, from initial design to deployment and support. - Design and configure CI data unification processes, including data ingestion pipelines, matching and merging rules, and segmentation models to create comprehensive and actionable customer profiles. - Demonstrate deep expertise in data quality management and ensuring data integrity. - Proficiency in integrating data with CI-Data using various methods, including standard connectors, API calls, and custom ETL pipelines (Azure Data Factory, SSIS). - Experience with different data sources and formats. - Hands-on experience with the Power Platform, including Power Automate (flows for data integration and automation), Power Apps (for custom interfaces and extensions), and Dataverse (data modeling and storage). - Strong skills in JavaScript, Power Fx, or other scripting languages relevant to CI customization and plugin development. - Ability to develop, test, and deploy custom functionalities, workflows, and plugins to enhance CI capabilities. - Proven experience in customer journey mapping, marketing automation, and campaign management using CI-Journeys. - Ability to design and implement personalized customer journeys based on data insights and business objectives. - Proactively troubleshoot and resolve technical issues within CI-Data and CI-Journeys environments, focusing on data integrity, performance optimization, and system stability. - Conduct root cause analysis and implement effective solutions. - Strong analytical and problem-solving skills, with experience leveraging CI data analytics to generate actionable business insights and recommendations. - Ability to translate data into compelling narratives and visualizations. - Excellent written and verbal communication skills for effectively liaising with stakeholders, clients, and internal teams. - Ability to clearly articulate technical concepts to both technical and non-technical audiences. - Provide technical mentorship and guidance to junior team members. - Contribute to knowledge sharing and best practices within the team. - Stay up-to-date with the latest D365 Customer Insights features, updates, and best practices. - Proactively seek opportunities to expand your technical knowledge and skills. Required Skills & Experience : - 9+ years of experience working with D365 Customer Insights (Data and Journeys), with a strong focus on technical implementation and configuration. - In-depth understanding of data unification, segmentation, and profile creation within CI-Data. - Proficiency in data integration with CI-Data using connectors, API calls, and custom ETL pipelines. - Hands-on experience with Power Platform tools (Power Automate, Power Apps, Dataverse). - Strong skills in JavaScript, Power Fx, or other scripting languages used for CI customization and plugin development. - Proven experience in customer journey mapping, marketing automation, and campaign management within CI-Journeys. - Deep understanding of marketing and customer experience principles and their application within CI. - Strong analytical and problem-solving skills, with experience using CI data analytics to drive business insights. - Excellent written and verbal communication skills.
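Since the role lists hands-on Dataverse experience, here is a minimal Python sketch of a Dataverse Web API read; token acquisition (e.g., via MSAL) is omitted, and the org URL, token, and column names are placeholders:

```python
import requests

# Placeholders: a real call needs an Azure AD access token scoped to the org
ACCESS_TOKEN = "<aad-access-token>"
ORG_URL = "https://<org>.api.crm.dynamics.com"

resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/contacts",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
        "Accept": "application/json",
    },
    params={"$select": "fullname,emailaddress1", "$top": "5"},
    timeout=30,
)
resp.raise_for_status()
for contact in resp.json()["value"]:
    print(contact["fullname"], contact["emailaddress1"])
```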

Posted 1 month ago

Apply

8.0 - 12.0 years

6 - 14 Lacs

Bengaluru

Remote

Job Summary: We are looking for a highly skilled Cloud Engineer with a strong background in real-time and batch data ingestion and processing, Azure products (DevOps), and Azure cloud. The ideal candidate should have a deep understanding of streaming architectures and performance optimization techniques in cloud environments, preferably in the subsurface domain.

Key Responsibilities:
- Automation experience is essential: scripting using PowerShell; ARM templates using JSON (PowerShell also acceptable); Azure DevOps with CI/CD; Site Reliability Engineering.
- Must be able to understand how the applications function.
- Ability to prioritize workloads and operate across several initiatives simultaneously.
- Update and maintain the Kappa Automate database and its connectivity with the PI historian and data lake.
- Participate in troubleshooting, performance tuning, and continuous improvement of the Kappa Automate platform.
- Design and implement highly configurable deployment pipelines in Azure.
- Configure Delta Lake on Azure Databricks.
- Apply performance tuning techniques such as partitioning, caching, and cluster configuration (see the sketch below).
- Work with various Azure storage types.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Collaborate with cross-functional teams (data scientists, analysts, business users).

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8+ years of experience in data engineering or a related role.
- Proven experience with Azure technologies.
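A minimal sketch of the Delta Lake partitioning and caching techniques named above, assuming a Databricks cluster where Delta is available; paths and column names are illustrative:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks `spark` already exists; created here so the sketch is self-contained
spark = SparkSession.builder.appName("delta-tuning-example").getOrCreate()

raw = spark.read.json("/mnt/landing/sensor-readings/")

# Partition by ingest date so downstream queries can prune files
(raw.withColumn("ingest_date", F.to_date("event_ts"))
    .write.format("delta")
    .partitionBy("ingest_date")
    .mode("append")
    .save("/mnt/lake/bronze/sensor_readings"))

# Cache a hot subset that is reused repeatedly within the same job
recent = (spark.read.format("delta")
          .load("/mnt/lake/bronze/sensor_readings")
          .filter(F.col("ingest_date") >= F.date_sub(F.current_date(), 7))
          .cache())
print(recent.count())
```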

Posted 1 month ago

Apply

10.0 - 12.0 years

19 - 25 Lacs

Mumbai

Remote

D365 Customer Insights (Data and Journeys) Technical Lead Location : Remote Experience : 9+ Years Responsibilities : - Lead and implement end-to-end technical solutions within D365 Customer Insights (Data and Journeys) to meet diverse client requirements, from initial design to deployment and support. - Design and configure CI data unification processes, including data ingestion pipelines, matching and merging rules, and segmentation models to create comprehensive and actionable customer profiles. - Demonstrate deep expertise in data quality management and ensuring data integrity. - Proficiency in integrating data with CI-Data using various methods, including standard connectors, API calls, and custom ETL pipelines (Azure Data Factory, SSIS). - Experience with different data sources and formats. - Hands-on experience with the Power Platform, including Power Automate (flows for data integration and automation), Power Apps (for custom interfaces and extensions), and Dataverse (data modeling and storage). - Strong skills in JavaScript, Power Fx, or other scripting languages relevant to CI customization and plugin development. - Ability to develop, test, and deploy custom functionalities, workflows, and plugins to enhance CI capabilities. - Proven experience in customer journey mapping, marketing automation, and campaign management using CI-Journeys. - Ability to design and implement personalized customer journeys based on data insights and business objectives. - Proactively troubleshoot and resolve technical issues within CI-Data and CI-Journeys environments, focusing on data integrity, performance optimization, and system stability. - Conduct root cause analysis and implement effective solutions. - Strong analytical and problem-solving skills, with experience leveraging CI data analytics to generate actionable business insights and recommendations. - Ability to translate data into compelling narratives and visualizations. - Excellent written and verbal communication skills for effectively liaising with stakeholders, clients, and internal teams. - Ability to clearly articulate technical concepts to both technical and non-technical audiences. - Provide technical mentorship and guidance to junior team members. - Contribute to knowledge sharing and best practices within the team. - Stay up-to-date with the latest D365 Customer Insights features, updates, and best practices. - Proactively seek opportunities to expand your technical knowledge and skills. Required Skills & Experience : - 9+ years of experience working with D365 Customer Insights (Data and Journeys), with a strong focus on technical implementation and configuration. - In-depth understanding of data unification, segmentation, and profile creation within CI-Data. - Proficiency in data integration with CI-Data using connectors, API calls, and custom ETL pipelines. - Hands-on experience with Power Platform tools (Power Automate, Power Apps, Dataverse). - Strong skills in JavaScript, Power Fx, or other scripting languages used for CI customization and plugin development. - Proven experience in customer journey mapping, marketing automation, and campaign management within CI-Journeys. - Deep understanding of marketing and customer experience principles and their application within CI. - Strong analytical and problem-solving skills, with experience using CI data analytics to drive business insights. - Excellent written and verbal communication skills.

Posted 1 month ago

Apply

8.0 - 10.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Your Role & Responsibilities :
- You will own a part of the Microsoft Ecosystem solution architecture (HCLTech Microsoft Industry Cloud) technology stack and portfolio to ensure alignment with both HCLTech and Microsoft sales and strategy priorities.
- Continually review, understand, and analyze the Microsoft strategy and technology roadmap, and ensure that this is contextualized and communicated, as a joint PoV, to the HCLTech key stakeholders through reports, documents, and roadmaps.
- Support proactive and reactive sales motions, including active participation in solution response creation and in customer and advisor group conversations.
- Maintain messaging, capability packs, and solution and creative briefs for in-scope sections of the overall HCLTech Microsoft Industry Cloud technology stack. Capabilities should be considered across Design, Sell, Implement, Deliver, and Govern, and tailored to the appropriate stakeholder group.
- Based on awareness of appropriate strategies, industry and market trends, and customer needs, ensure that HCLTech Product and Service owners are creating products and offerings that correctly exploit the solution stack.
- Ensure the HCLTech standard services are validated by MSFT and published in the MSFT sales services catalogue, and that MSFT field sales are incentivized to sell through the indirect channel.
- Track relevant Microsoft funding programs, enabling services and ensuring that HCLTech generates revenue through this channel to fund strategic programs.
- Work collaboratively with Product and Services owners and other stakeholders to ensure product revision and launch plans are in place and executed perfectly to create market momentum, and, as needed, ensure plans are suitably amplified within Microsoft stakeholder groups.
- Work with teams within HCLTech and Microsoft, and with third parties, to make sure that all stakeholders understand the differentiated value of the solution stack and how HCLTech is uniquely differentiated to drive digital transformation across the Microsoft cloud.
- As the point of connection between both companies, the GTM Manager will manage relationships with a variety of senior stakeholders with sales, commercial, and technical backgrounds. This will involve solving issues, addressing escalations, and working proactively to ensure that services can be sold, implemented, and supported throughout the entire lifecycle.
- Strategies and services offerings are developed through many channels, with multiple partners, across multiple HCLTech lines of business, and with numerous external stakeholders. A strong program management approach is necessary to track, report, and ensure timely completion of all activities.

Qualifications & Experience :

Minimum Requirements :
- While deep technical skills are not required for this role, candidates should have a minimum of 8-10 years' experience in a technical role with a GSI.
- Successful candidates must be able to apply their experience to solve business issues, create excited customers, and build commercially viable products.
- The Cloud business is continually changing, so continual learning and the ability to research new topics are critical for success in this role.
- Experience of Product or Category Management, including ownership of the entire lifecycle and financial reporting, is a key aspect of the role, and candidates should be able to demonstrate development of a product portfolio and market share.
- The candidate should have experience in similar roles, ideally within a company of similar scale and scope to HCLTech.
- Experience of one or more of the following industry verticals: Manufacturing, including Industry 4.0 and IoT-enabled connected services; Life Sciences and Healthcare, including connected and smart platforms and medtech.
- Experience of one or more of the following horizontal workloads: Data and Analytics, including data ingestion, storage, governance, and AI (Microsoft IDP/Fabric and OpenAI); Cybersecurity, including managed services, edge through core to device (Sentinel, Defender); Digital Workplace, including Teams, Viva, and frontline worker services.

Necessary Skills :
- Celebrates team-first and individual successes and creates a safe environment for constructive feedback.
- Ability to work flexibly among quickly changing priorities while consistently delivering to tight deadlines.
- High levels of influencing skills, including professional, effective, and persuasive oral and written communications to all audience levels, technical and non-technical, including executives.
- Ability to be collaborative and collegial, with the confidence to make tough decisions.
- Anticipates problems and sees how a problem and its solution will affect other projects, people, or processes. Should not only be able to identify and document dependencies, conflicts, roadblocks, and issues, but also solve them, focusing on agreed deadlines.
- Must be able to set their own priorities, follow through, and work towards agreed targets with a minimum of supervision.
- Technology is rapidly changing and evolving, and successful candidates must take responsibility for the continual development of their own skills.
- Strong executive presence, including written communication, executive reporting, and presentation skills, with a high degree of comfort presenting to large and small technical audiences.
- Inclusive and collaborative - driving teamwork and cross-team alignment, and building relationships and networks across the various ecosystems to leverage ideas and best practice.
- A methodical approach to detail is important. Should be able to identify sources of data and analyze information from multiple sources, including technology trends, buyer behaviors, and financial performance, to create actionable and positive recommendations.

Posted 1 month ago

Apply