8.0 - 12.0 years
20 - 25 Lacs
Pune
Work from Office
Designation: Big Data Lead/Architect
Location: Pune
Experience: 8-10 years
Notice Period: Immediate joiner / 15-30 days
Reports To: Product Engineering Head

Job Overview
We are looking to hire a talented Big Data engineer to develop and manage our company's Big Data solutions. In this role, you will design and implement Big Data tools and frameworks, implement ELT processes, collaborate with development teams, build cloud platforms, and maintain the production system. To succeed as a Big Data engineer, you should have in-depth knowledge of Hadoop technologies, excellent project management skills, and strong problem-solving skills. A top-notch Big Data engineer understands the needs of the company and institutes scalable data solutions for its current and future needs.

Responsibilities:
- Meeting with managers to determine the company's Big Data needs.
- Developing Big Data solutions on AWS using Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, Hadoop, etc. (a PySpark sketch follows this listing).
- Loading disparate data sets and conducting pre-processing using Athena, Glue, Spark, etc.
- Collaborating with the software research and development teams.
- Building cloud platforms for the development of company applications.
- Maintaining production systems.

Requirements:
- 8-10 years of experience as a Big Data engineer.
- Proficiency in Python and PySpark.
- In-depth knowledge of Hadoop, Apache Spark, Databricks, Delta Tables, and AWS data analytics services.
- Extensive experience with Delta Tables and the JSON and Parquet file formats.
- Good to have: experience with AWS data analytics services such as Athena, Glue, Redshift, and EMR.
- Familiarity with data warehousing is a plus.
- Knowledge of NoSQL and RDBMS databases.
- Good communication skills.
- Ability to solve complex data processing and transformation problems.
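A minimal PySpark sketch of the kind of pipeline this role describes: reading raw JSON from S3, de-duplicating it, and writing a partitioned Delta table. Bucket names, paths, and columns are illustrative assumptions, and the configuration assumes the delta-spark package is available on the cluster (as on Databricks or EMR with Delta enabled).

```python
# Minimal PySpark sketch: raw JSON -> cleaned Delta table.
# All bucket names, paths, and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("raw-to-delta")
    # Delta Lake support; assumes delta-spark is on the classpath
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

raw = spark.read.json("s3://example-raw-bucket/events/")   # disparate source data
cleaned = (
    raw.dropDuplicates(["event_id"])                       # basic pre-processing
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

(cleaned.write.format("delta")
        .mode("append")
        .partitionBy("event_date")
        .save("s3://example-curated-bucket/events_delta/"))
```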
Posted 1 month ago
5.0 - 10.0 years
12 - 16 Lacs
Noida
Work from Office
Increasing digitalization and flexibility of production processes presents outstanding potential. In Digital Industries, we enable our customers to unlock their full potential and drive digital transformation with a unique portfolio of automation and digitalization technologies. From hardware to software to services, we've got quite a lot to offer. How about you? We blur the boundaries between industry domains by integrating the virtual and physical, hardware and software, design and manufacturing worlds. With the rapid pace of innovation, digitalization is no longer tomorrow's idea. We take what the future promises tomorrow and make it real for our customers today. Join us - where your career meets tomorrow.

Siemens EDA is a global technology leader in Electronic Design Automation software. Our software tools enable companies around the world to develop highly innovative electronic products faster and more efficiently. Our customers use our tools to push the boundaries of technology and physics to deliver better products in the increasingly complex world of chip, board, and system design.

Questa Simulation Product: a core R&D team working on multiple verticals of Simulation (a toy sketch of this style of simulation follows the listing) - a very energetic and enthusiastic team of motivated individuals. This role is based in Noida, but you'll also get to visit other locations in India and around the globe, so you'll need to go where this job takes you. In return, you'll get the chance to work with teams impacting entire cities, countries, and the shape of things to come.

Responsibilities:
We are looking for a highly motivated software engineer to work in the QuestaSim R&D team of Siemens EDA. Development responsibilities will include core algorithmic advances and software design/architecture. You will collaborate with a senior group of software engineers, contributing production-level quality for new components and algorithms, creating new engines, and supporting existing code. Self-motivation, self-discipline, and the ability to set personal goals and work consistently towards them in a dynamic environment will go far towards contributing to your success. We Are Not Looking for Superheroes, Just Super Minds! We've got quite a lot to offer. How about you?

Required Experience:
- A graduate with at least 5+ years of relevant working experience, with a B.Tech or M.Tech in CSE/EE/ECE from a reputed engineering college.
- Proficiency in C/C++, algorithms, and data structures.
- Compiler concepts and optimizations.
- Experience with UNIX and/or Linux platforms is vital.
- Basic digital electronics concepts.
- We value your knowledge of Verilog, SystemVerilog, and VHDL.
- Experience in parallel algorithms and job distribution.
- Understanding of ML/AI algorithms and their implementation in data-driven tasks.
- Exposure to simulation- or formal-based verification methodologies would be a plus!
- Self-motivated and able to work independently; able to guide others towards project completion.
- Good problem-solving and analytical skills.

A collection of over 377,000 minds building the future, one day at a time, in over 200 countries. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and creativity and help us shape tomorrow! #LI-EDA #LI-HYBRID #DVT
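For readers unfamiliar with the domain: the heart of a logic simulator such as QuestaSim is an event-driven algorithm in which signal changes are queued as timestamped events, and each gate evaluation may schedule further events. The toy sketch below illustrates that style for a single AND gate. It is a teaching aid only, not a representation of QuestaSim's implementation (which is C/C++).

```python
# Toy event-driven logic simulation: events on signals are processed in
# time order, and gate evaluations schedule new events after a delay.
import heapq

events = [(0, "a", 1), (0, "b", 1), (5, "a", 0)]  # (time, signal, value)
heapq.heapify(events)
signals = {"a": 0, "b": 0, "y": 0}
GATE_DELAY = 2  # AND-gate propagation delay (arbitrary units)

while events:
    t, sig, val = heapq.heappop(events)
    if signals[sig] == val:
        continue                      # no change, nothing to re-evaluate
    signals[sig] = val
    if sig in ("a", "b"):             # re-evaluate the AND gate y = a & b
        new_y = signals["a"] & signals["b"]
        if new_y != signals["y"]:
            heapq.heappush(events, (t + GATE_DELAY, "y", new_y))
    print(f"t={t}: {sig} -> {val}")
```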
Posted 1 month ago
8.0 - 13.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make optimal use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective processes.

We are looking for a Sr. AWS Cloud Architect.

- Architect and Design: Develop scalable and efficient data solutions using AWS services such as AWS Glue, Amazon Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, AWS Glue Streaming ETL, and EMR.
- Integration: Integrate real-time data from various Siemens organizations into our data lake, ensuring seamless data flow and processing (see the streaming sketch after this listing).
- Data Lake Management: Design and manage a large-scale data lake using AWS services like S3, Glue, and Lake Formation.
- Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
- Snowflake Integration: Implement and manage data pipelines to load data into Snowflake, utilizing Iceberg tables for optimal performance and flexibility.
- Performance Optimization: Optimize data processing pipelines for performance, scalability, and cost-efficiency.
- Security and Compliance: Ensure that all solutions adhere to security best practices and compliance requirements.
- Collaboration: Work closely with cross-functional teams, including data engineers, data scientists, and application developers, to deliver end-to-end solutions.
- Monitoring and Troubleshooting: Implement monitoring solutions to ensure the reliability and performance of data pipelines; troubleshoot and resolve any issues that arise.

You'd describe yourself as:
- Experience: 8+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
- Technical Skills: Proficiency in AWS services such as AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka, and Lake Formation. Experience with real-time data processing and streaming architectures.
- Big Data Querying Tools: Strong knowledge of big data querying tools (e.g., Hive, PySpark).
- Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
- Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
- Certifications: AWS certifications are a plus.

Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. Find out more about Siemens careers at
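One way the real-time ingestion described above might look: a Spark Structured Streaming job reading from Kafka and landing Parquet in an S3 data lake. Broker, topic, schema, and path names are assumptions, and the job presumes the Spark-Kafka connector package is on the cluster.

```python
# Sketch: streaming ingestion from Kafka into an S3 data lake with Spark
# Structured Streaming. Topic, broker, schema, and paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

schema = StructType([
    StructField("org_id", StringType()),
    StructField("payload", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "org-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-data-lake/events/")
    .option("checkpointLocation", "s3://example-data-lake/_checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```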
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering, BCA, BTech, MBA, MTech, MCA
Service Line: Application Development and Maintenance

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
- At least 1 year of experience in HL7 FHIR implementation.
- Deep knowledge of the HL7 FHIR 4.0.1 standard.
- Knowledge of FHIR implementation guides such as DaVinci, CARIN, and US Core.
- Experience performing data mapping of source data sets to FHIR resources (see the sketch below).
- Analyzes business needs, defines detailed requirements, and proposes potential solutions/approaches with business stakeholders.
- Strong experience and understanding of Agile methodologies.
- Strong written and oral communication and interpersonal skills.
- Strong analytical, planning, organizational, time management, and facilitation skills.
- Strong understanding and experience of the SDLC, with good documentation skills.
- Proficiency in the Microsoft suite (Word, Excel, Access, PowerPoint, Project, Visio, Outlook), Microsoft SQL Studio, and JIRA.

Preferred Skills: Domain-Healthcare-Healthcare - ALL
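As referenced in the data-mapping bullet, here is a minimal sketch of mapping a source member record onto a FHIR R4 Patient resource. The source field names and identifier system are hypothetical; the resource layout follows the published FHIR 4.0.1 Patient structure.

```python
# Sketch: mapping a source member record onto a minimal FHIR R4 Patient.
# Field names in `source` and the identifier system are hypothetical.
import json

source = {"member_id": "M1001", "first": "Asha", "last": "Rao",
          "dob": "1984-07-19", "sex": "female"}

patient = {
    "resourceType": "Patient",
    "identifier": [{
        "system": "https://example.org/member-ids",   # assumed identifier system
        "value": source["member_id"],
    }],
    "name": [{"family": source["last"], "given": [source["first"]]}],
    "gender": source["sex"],          # FHIR administrative-gender code
    "birthDate": source["dob"],       # ISO 8601 date, as FHIR requires
}

print(json.dumps(patient, indent=2))
```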
Posted 1 month ago
5.0 - 10.0 years
9 - 13 Lacs
Pune
Work from Office
BA JD: Primary responsibilities are as follows:
1. Creation of epics and writing of user stories in conjunction with the Product Owner/Business Teams, including non-functional requirements
2. Supports the Product Owner/Business Teams to manage the product backlog
3. Works with the Product Owner/Business Teams for documenting business requirements
4. Analysis of customer journeys, product features and impact on systems
5. Process analysis and improvement activities
6. Assesses operational considerations to support effective solution design
7. Collaborative interactions with teams from different locations and regions
8. Functional support to IT teams during the project execution phase

Profile expectations:
1. Post Graduate with a background in Finance, MBA, CA
2. 7+ years of experience in banks and/or as an IT BA on banking projects (preferably for global banks like HSBC)
3. Knowledge of banking products (Deposits, Overdraft, Loans, Payments, basics of Finance) and processes (Onboarding, Fulfilment, Operations, Reporting)
4. Effective communication skills, both written and verbal, for technical and non-technical audiences
5. Experience with the Agile delivery model
Posted 1 month ago
10.0 - 13.0 years
12 - 15 Lacs
Bengaluru
Work from Office
About the Opportunity
Job Type: Application, 31 July 2025
Title: Principal Data Engineer (Associate Director)
Department: ISS
Location: Bangalore
Reports To: Head of Data Platform - ISS
Grade: 7

Department Description
The ISS Data Engineering Chapter is an engineering group comprising three sub-chapters - Data Engineers, Data Platform and Data Visualisation - that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our Investment Process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading.

Purpose of your role
This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership and delivering a subsection of the wider data platform.

Key Responsibilities
- Design, develop and maintain scalable data pipelines and architectures to support data ingestion, integration and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts and stakeholders to understand data requirements, validate designs and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience

Core Technical Skills
- Expert in leveraging cloud-based data platform (Snowflake, Databricks) capabilities to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services, such as Lambda, EMR, MSK, Glue and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
- Expert in designing, building and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
- Data Security & Performance Optimization: experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (Dynamo, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.) - a sketch of this pattern follows the listing.

Bonus Technical Skills
- Strong experience in containerisation and experience deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.

Key Soft Skills
- Problem-Solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong in strategic communication and stakeholder engagement.
- Project Management: experienced in overseeing project lifecycles, working with Project Managers to manage resources.

Feel rewarded
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.
For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
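A minimal sketch of the orchestration pattern named in the skills list: an Airflow 2.x DAG chaining an ingestion task into a transformation task. Task names and bodies are placeholders, not Fidelity's actual pipeline.

```python
# Sketch: a small Airflow 2.x DAG chaining ingest -> transform.
# Task bodies are placeholders for real pipeline steps.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull raw files into the lake")   # placeholder

def transform():
    print("build curated tables")           # placeholder

with DAG(
    dag_id="example_ingest_transform",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task   # dependency: transform runs after ingest
```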
Posted 1 month ago
10.0 - 15.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Experience:
- 8+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows (a Glue job skeleton follows this listing).
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

Responsibilities:
- Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
- Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
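A skeleton of the kind of Glue ETL job referenced above: read a catalogued table, apply a simple transformation, and write partitioned Parquet back to S3. Database, table, and path names are illustrative.

```python
# Skeleton of an AWS Glue ETL job: Data Catalog -> transform -> Parquet on S3.
# Database, table, and path names are illustrative.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered by a Glue Crawler (names assumed)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders")

df = dyf.toDF().dropDuplicates(["order_id"])   # simple transformation step

(df.write.mode("overwrite")
   .partitionBy("order_date")
   .parquet("s3://example-curated/orders/"))

job.commit()
```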
Posted 1 month ago
2.0 - 7.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering, BCA, BTech, MTech, MBA, MCA
Service Line: Application Development and Maintenance

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
- Domain experience: Payer core - claims/membership/provider management.
- Domain experience: Provider clinical/RCM, pharmacy benefit management.
- Healthcare Business Analysts with Agile/SAFe-Agile business analysis experience.
- Medicaid-experienced Business Analysts.
- FHIR, HL7 data analysis and interoperability consulting.
- Healthcare digital transformation consultants with skills/experience in cloud data solution design, data analysis/analytics, and RPA solution design.

Keywords: Claims, Provider, utilization management experience, Pricing, Agile, BA
Preferred Skills: Domain-Healthcare-Healthcare - ALL Technology-Analytics - Functional-Business Analyst
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Key responsibilities include the following:
- Develop and maintain scalable data pipelines using PySpark; proven development experience with PySpark expertise is required. Knowledge of Ab Initio is good to have.
- Experience with distributed computing and parallel processing.
- Proficiency in SQL and experience with database systems.
- Collaborate with data engineers and data scientists to understand and fulfil data processing needs.
- Optimize and troubleshoot existing PySpark applications for performance improvements (a tuning sketch follows this listing).
- Write clean, efficient, and well-documented code following best practices.
- Participate in design and code reviews.
- Develop and implement ETL processes to extract, transform, and load data.
- Ensure data integrity and quality throughout the data lifecycle.
- Stay current with the latest industry trends and technologies in big data and cloud computing.
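One common example of the PySpark performance work this role mentions: broadcasting a small dimension table so a join avoids shuffling the large fact table. Paths and column names are assumptions.

```python
# Sketch: avoid a full shuffle by broadcasting a small lookup table.
# Paths and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

facts = spark.read.parquet("s3://example/facts/")   # large table
dims = spark.read.parquet("s3://example/dims/")     # small lookup table

# broadcast() ships `dims` to every executor instead of shuffling `facts`
joined = facts.join(broadcast(dims), on="dim_id", how="left")
joined.write.mode("overwrite").parquet("s3://example/joined/")
```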
Posted 1 month ago
8.0 - 13.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Requirements:
- 6+ years of experience with Java Spark.
- Strong understanding of distributed computing, big data principles, and batch/stream processing.
- Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena.
- Experience with Data Lake architectures and handling large volumes of structured and unstructured data.
- Familiarity with various data formats.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using Java Spark (a sketch of the pattern follows this listing).
- Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments.
- Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.
- Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.
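The role centres on Java Spark; for consistency with the other sketches in this write-up, the same ingest-transform-store pattern is shown in PySpark, and the logic translates directly to the Java Dataset API. All paths and formats are assumptions.

```python
# Sketch of a multi-format data lake ingest (the role itself uses Java Spark;
# this PySpark version illustrates the same pattern). Paths are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-ingest").getOrCreate()

# Ingest structured and semi-structured landing data
csv_df = spark.read.option("header", True).csv("s3://example-landing/csv/")
json_df = spark.read.json("s3://example-landing/json/")

# Normalise and store as Parquet in the curated zone
for name, df in [("csv_feed", csv_df), ("json_feed", json_df)]:
    df.write.mode("overwrite").parquet(f"s3://example-curated/{name}/")
```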
Posted 1 month ago
8.0 - 13.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Requirements:
- 6+ years of experience with Java Spark.
- Strong understanding of distributed computing, big data principles, and batch/stream processing.
- Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena.
- Experience with Data Lake architectures and handling large volumes of structured and unstructured data.
- Familiarity with various data formats.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using Java Spark.
- Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments.
- Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.
- Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.
Posted 1 month ago
8.0 - 13.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Requirements:
- 10+ years of experience with Java Spark.
- Strong understanding of distributed computing, big data principles, and batch/stream processing.
- Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena.
- Experience with Data Lake architectures and handling large volumes of structured and unstructured data.
- Familiarity with various data formats.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using Java Spark.
- Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments.
- Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Gurugram
Work from Office
Requirements:
- 6-8 years of experience, with at least 4 years in test automation. Prior automation experience is a must.
- Familiarity with Python for test automation and scripting.
- Minimum 6 years of experience in QA/testing, with a focus on payments, ETL, and data engineering projects.
- Excellent communication skills to work effectively with cross-functional teams.

Good to Have:
- 6-8 years of experience in Payments/SWIFT/ISO/ETL testing.
- Strong SQL skills for querying, comparing, and validating large datasets (a reconciliation sketch follows this listing).
- Experience in testing ETL pipelines and data transformations; prior experience testing ETL migrations from legacy systems like SAS DI to modern platforms is a plus.
- Hands-on experience with cloud platforms, particularly AWS services like S3, EMR, and PostgreSQL on AWS.
- Knowledge of SAS DI is highly desirable for understanding legacy pipelines.
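A sketch of the SQL-based validation this QA role describes: comparing row counts and a column total between a legacy source table and its migrated target on PostgreSQL (mentioned in the posting). Connection details and table names are hypothetical.

```python
# Sketch: ETL migration reconciliation - compare row counts and a checksum
# column between source and target. Connection details are hypothetical.
import psycopg2   # PostgreSQL on AWS, as named in the posting

RECON_SQL = """
    SELECT COUNT(*) AS row_count,
           COALESCE(SUM(amount), 0) AS amount_total
    FROM {table};
"""

def snapshot(conn, table):
    with conn.cursor() as cur:
        cur.execute(RECON_SQL.format(table=table))
        return cur.fetchone()

conn = psycopg2.connect(host="example-host", dbname="payments",
                        user="qa_user", password="***")
source = snapshot(conn, "legacy.transactions")
target = snapshot(conn, "migrated.transactions")
conn.close()

assert source == target, f"Mismatch: source={source}, target={target}"
print("Reconciliation passed:", source)
```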
Posted 1 month ago
8.0 - 13.0 years
3 - 7 Lacs
Hyderabad
Work from Office
P1-C3-STS
Seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources.
- Can optimize data models for performance and efficiency.
- Able to write SQL queries to support data analysis and reporting (an Athena example follows this listing).
- Design, implement, and maintain the data architecture for all AWS data services.
- Work with stakeholders to identify business needs and requirements for data-related projects.
- Design and implement ETL processes to load data into the data warehouse.
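A small boto3 sketch of the Athena usage listed above: submit a query, poll for completion, and read the results. Region, database, table, and output bucket are assumptions.

```python
# Sketch: run an Athena query from Python with boto3 and print the rows.
# Region, database, table, and output bucket are assumptions.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes
while True:
    state = athena.get_query_execution(
        QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```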
Posted 1 month ago
4.0 - 9.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Requirements:
- Minimum 6 years of hands-on experience in data engineering or big data development roles.
- Strong programming skills in Python and experience with Apache Spark (PySpark preferred).
- Proficient in writing and optimizing complex SQL queries.
- Hands-on experience with Apache Airflow for orchestration of data workflows.
- Deep understanding and practical experience with AWS services:
  - Data Storage & Processing: S3, Glue, EMR, Athena
  - Compute & Execution: Lambda, Step Functions (a Lambda handler sketch follows this listing)
  - Databases: RDS, DynamoDB
  - Monitoring: CloudWatch
- Experience with distributed data processing, parallel computing, and performance tuning.
- Strong analytical and problem-solving skills.
- Familiarity with CI/CD pipelines and DevOps practices is a plus.
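A minimal sketch of the Lambda-based compute mentioned in the AWS list: a handler triggered by an S3 event that fetches the new object. The event shape follows the standard S3 notification format; the processing step is a placeholder.

```python
# Sketch: Lambda handler for S3 object-created events.
# The processing step is a placeholder; a real job might trigger Glue or EMR.
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        print(f"processing s3://{bucket}/{key}: {len(body)} bytes")
    return {"statusCode": 200, "body": json.dumps("ok")}
```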
Posted 1 month ago
8.0 - 13.0 years
5 - 10 Lacs
Pune
Work from Office
Data Engineer - Position Summary
The Data Engineer is responsible for building and maintaining data pipelines, ensuring the smooth operation of data systems, and optimizing workflows to meet business requirements. This role will support data integration and processing for various applications.

Minimum Qualifications
- 6 years of overall IT experience, with a minimum of 4 years of work experience in the tech skills below.

Tech Skills
- Proficient in Python scripting and PySpark for data processing tasks.
- Strong SQL capabilities, with hands-on experience managing big data using ETL tools like Informatica.
- Experience with the AWS cloud platform and its data services, including S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, and EventBridge.
- Skilled in Bash shell scripting.
- Understanding of data lakehouse architecture, particularly with the Iceberg format, is a plus.

Preferred
- Experience with Kafka and Mulesoft API.
- Understanding of healthcare data systems is a plus.
- Experience in Agile methodologies.
- Strong analytical and problem-solving skills.
- Effective communication and teamwork abilities.

Responsibilities
- Develop and maintain data pipelines and ETL processes to manage large-scale datasets.
- Collaborate to design and test data architectures that align with business needs.
- Implement and optimize data models for efficient querying and reporting.
- Assist in the development and maintenance of data quality checks and monitoring processes (a sketch follows this listing).
- Support the creation of data solutions that enable analytical capabilities.
- Contribute to aligning data architecture with overall organizational solutions.
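A sketch of the data quality checks mentioned in the responsibilities: simple PySpark assertions on null keys and duplicates before a dataset is published. Paths and the key column are illustrative.

```python
# Sketch: gate a dataset on basic data quality checks before publishing.
# Paths and the key column are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-staging/claims/")

null_keys = df.filter(F.col("claim_id").isNull()).count()
dupes = df.count() - df.dropDuplicates(["claim_id"]).count()

if null_keys or dupes:
    raise ValueError(f"DQ failed: {null_keys} null keys, {dupes} duplicates")

df.write.mode("overwrite").parquet("s3://example-published/claims/")
```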
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Experience:
- 8 years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

Responsibilities:
- Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms (an S3-to-Redshift COPY sketch follows this listing).
- Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
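A sketch of the S3-to-Redshift movement described above, issuing a COPY through the Redshift Data API from Python. Cluster identifier, IAM role ARN, and table names are assumptions.

```python
# Sketch: load curated Parquet from S3 into Redshift via the Data API.
# Cluster, role ARN, and table names are assumptions.
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

copy_sql = """
    COPY analytics.orders
    FROM 's3://example-curated/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="loader",
    Sql=copy_sql,
)
print("statement id:", resp["Id"])   # poll describe_statement for status
```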
Posted 1 month ago
10.0 - 15.0 years
30 - 35 Lacs
Noida
Work from Office
We at Innovaccer are looking for a Director - Clinical Informatics. You need to have structured problem-solving skills, strong analytical abilities, willingness to take initiatives and drive them, excellent verbal and written communication skills, and high levels of empathy towards internal and external stakeholders, among other things. The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a leader who will own and manage the clinical ontologies at Innovaccer. He/she will also help Innovaccer build clinical workflows and care protocols to facilitate clinical decision support at the point of care.

A Day in the Life
- Build a new product development pipeline aligning the company's portfolio with market insights across personas using clinical decision support in EHRs.
- Own market research and build business cases to enable prioritization and build/buy/partner assessment by executive-level innovation governance.
- Work successfully in a matrixed environment across business units to understand the big picture, build cross-functional relationships, and leverage content assets to solve customer (internal and external) problems.
- Work on a pioneering FHIR-based, EHR-integrated, patient-context-specific, evidence-based guideline solution to reduce care variability.
- Bring a solid understanding of clinical informatics standards (FHIR, CCDA, CDS Hooks, etc.) and terminologies (RxNorm, LOINC, SNOMED, etc.) - see the CDS Hooks sketch after this listing.
- Build a successful Clinical Quality Improvement program for assessing the clinical credibility of Nuance's NLP engines for clinical documentation quality.
- Create buy-in from executive leadership and cross-functional alignment among stakeholders from product, engineering, and the implementation/customer success teams.
- Own the creation of analytics and quality metrics for provider and payor benchmarking and their monetization for the speech recognition and revenue cycle products, working with the CMO, CMIOs, clinical documentation specialists, and the Product-Engineering team to productize them.
- Lead development of clinical content for clinical decision support (CDS) to improve clinical documentation.
- Collaborate with clinical informaticists, data scientists, clinical SMEs, product, and engineering teams to build CDS solutions with a deep understanding of the EHR workflow.
- Manage and define clinical ontologies and implement industry best practices for building value sets.
- The role involves client interaction during US hours, so you should be comfortable working in that time zone.

What You Need
- Advanced healthcare degree (MD, PharmD, RN, or Master's in Health Informatics) with 10+ years of clinical informatics experience and 5+ years in managerial/leadership roles.
- Deep technical expertise in clinical informatics standards (FHIR, HL7, CCDA, CDS Hooks) and terminologies (SNOMED CT, LOINC, RxNorm) with hands-on EHR experience.
- Proven track record of implementing clinical decision support systems, EHR integrations, and healthcare analytics platforms in complex healthcare environments.
- Strong clinical knowledge with understanding of care delivery processes, evidence-based medicine, clinical workflows, and regulatory requirements (HIPAA, CMS programs).
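For context on the CDS Hooks standard named in this posting, the sketch below shows the general shape of a CDS Hooks response card. The summary text, source label, and URL are invented for illustration.

```python
# Illustrative shape of a CDS Hooks response card (fields per the public
# CDS Hooks specification); the advice text and links are invented.
import json

cds_response = {
    "cards": [{
        "uuid": "9368d5ce-0000-4000-8000-example",
        "summary": "Documented A1c is above the guideline threshold",
        "indicator": "warning",              # info | warning | critical
        "detail": "Consider therapy intensification per protocol.",
        "source": {
            "label": "Example Guideline Service",   # assumed source label
            "url": "https://example.org/guidelines/a1c",
        },
    }]
}

print(json.dumps(cds_response, indent=2))
```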
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager - Roles and responsibilities:
- Designing and implementing scalable, reliable, and maintainable data architectures on AWS.
- Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments.
- Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. (a Glue Data Catalog sketch follows this listing).
- Integrating AWS data solutions with existing systems and third-party services.
- Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval.
- Implementing data security and encryption best practices in AWS environments.
- Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed.
- Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.

Technical and Functional Skills:
- Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.
- Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Proficiency in programming languages commonly used in data engineering, such as Python, SQL, Scala, or Java.
- Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
- Ability to analyze complex technical problems and propose effective solutions.
- Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
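One concrete instance of the AWS cataloguing work listed above: registering an S3 dataset in the Glue Data Catalog with a crawler via boto3. The crawler name, IAM role, database, and path are assumptions.

```python
# Sketch: register an S3 dataset in the Glue Data Catalog with a crawler.
# Crawler name, IAM role, database, and path are assumptions.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="example-orders-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-role",
    DatabaseName="example_db",
    Targets={"S3Targets": [{"Path": "s3://example-curated/orders/"}]},
    SchemaChangePolicy={"UpdateBehavior": "UPDATE_IN_DATABASE",
                        "DeleteBehavior": "LOG"},
)
glue.start_crawler(Name="example-orders-crawler")
```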
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager - Roles and responsibilities:
- Designing and implementing scalable, reliable, and maintainable data architectures on AWS.
- Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments.
- Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc.
- Integrating AWS data solutions with existing systems and third-party services.
- Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval.
- Implementing data security and encryption best practices in AWS environments.
- Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed.
- Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.

Technical and Functional Skills:
- Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.
- Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Proficiency in programming languages commonly used in data engineering, such as Python, SQL, Scala, or Java.
- Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
- Ability to analyze complex technical problems and propose effective solutions.
- Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
Process Manager - AWS Data Engineer
Mumbai/Pune | Full-time (FT) | Technology Services
Shift Timings: EMEA (1pm-9pm) | Management Level: PM | Travel: NA

The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role enables one to identify discrepancies and propose optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager - Roles and responsibilities:
- Understand clients' requirements and provide effective and efficient solutions in AWS using Snowflake.
- Assemble large, complex sets of data that meet non-functional and functional business requirements.
- Use Snowflake/Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse (a Snowflake loading sketch appears at the end of this listing).
- Demonstrate strength and experience in data modeling, ETL development, and data warehousing concepts.
- Understand data pipelines and modern ways of automating data pipelines using cloud-based tooling.
- Test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions.
- Perform data quality testing and assurance as part of designing, building, and implementing scalable data solutions in SQL.

Technical and Functional Skills:
- AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Programming Languages: Proficiency in programming languages commonly used in data engineering, such as Python, SQL, Scala, or Java.
- Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
- Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions.
- Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
- Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.

About eClerx
eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry.
Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

About eClerx Technology
eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities.

To know more about us, visit https://eclerx.com

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
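As referenced in the responsibilities above, a sketch of the Snowflake loading pattern this listing describes: issuing COPY INTO over the Snowflake Python connector to pull staged Parquet into a table. Account, credentials, stage, and table names are assumptions.

```python
# Sketch: load staged Parquet into Snowflake with COPY INTO via the
# Python connector. Account, credentials, stage, and table are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)

with conn.cursor() as cur:
    cur.execute("""
        COPY INTO analytics.public.orders
        FROM @orders_stage/curated/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())   # per-file load results
conn.close()
```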
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
The candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. He/she must be able to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, he/she must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager - Roles and responsibilities:
- Part visual storyteller, part designer, part content creator, you will be responsible for creating, enhancing, and supporting diverse and complex pre-sales and post-sales threads and collateral development initiatives.
- Presentations/Collaterals for sales contexts/meetings: You will collaborate with strategists, subject matter experts, and consultants to storyboard and create engaging and aesthetically intuitive presentations for various sales contexts and client meetings (pitches, workshops, points of view, responses to requests for proposals (RFPs), QBRs, SBRs, etc.). These presentations or collaterals are typically emailed or presented to CXO-level and/or technical audiences in leading companies across the world. As part of this, you will occasionally pursue quick hits (with turnaround times as short as a day) and more frequently work on detailed pieces spanning a few days.
- Collaborate with peers and internal teams to source and create case studies, mock dashboards, and sample deliverables to augment pitch decks and content readiness for upcoming pursuits.
- Multi-Format Sales & Marketing Collaterals: Storyboard and create multi-format content/collaterals in the form of brochures, infographics, product sheets, sell sheets, banners, teasers, product demos, and product videos. In addition, you will support program-level initiatives such as newsletters and internal training programs.
- You will be responsible for organizing, managing, and governing the steady stream of collaterals being produced and evangelized both internally and externally, in line with the processes defined by the team.
- Create and maintain a library of presentation templates for internal and external use.
- Check and balance templates to ensure they are up to date and in line with company or client branding.

Technical and Functional Skills:
- Bachelor's degree with 5+ years of experience in presentation design and/or a creative visualizer role in sales and marketing contexts.
- Strong knowledge and proficiency in presentation software such as PowerPoint (must-have) and Prezi (good-to-have).
- Proficiency with the Adobe Creative Suite (After Effects, Illustrator, InDesign) and a sound understanding of interoperability processes.
- Proven talent in creative and visual thinking.
- Excellent verbal and written communication skills.
- Proven talent for transforming complex information into simple yet striking visualizations.
- An impressive portfolio (please share the link) that showcases what you would bring to this role.
Posted 1 month ago
4.0 - 5.0 years
9 - 19 Lacs
Hyderabad
Work from Office
Hi All, we have immediate openings for the below requirement.

Role: Hadoop Administration
Skill: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS)
Experience: 4 to 9 years
Work location: Hyderabad
Interview Mode: 1st round virtual, 2nd round face-to-face
Notice Period: 15 days to immediate joiners only
Interested candidates can share your CV to: sravani.vommi@sonata-software.com
Contact: 7075751998

JD for Hadoop Admin: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS)

Job Summary: We are seeking a highly skilled Hadoop Administrator with hands-on experience managing distributed data platforms such as Hadoop EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, and Neo4j.

Key Responsibilities:

Cluster Management:
- Administer, manage, and maintain Hadoop EMR clusters, ensuring optimal performance, high availability, and resource utilization.
- Handle the provisioning, configuration, and scaling of Hadoop clusters, with a focus on EMR, ensuring seamless integration with other ecosystem tools (e.g., Spark, Kafka, HBase); a boto3 scaling sketch follows this listing.
- Oversee HBase configurations, performance tuning, and integration within the Hadoop ecosystem.
- Manage OpenSearch (formerly known as Elasticsearch) for log analytics and large-scale search applications.

Data Integration & Processing:
- Oversee the performance and optimization of Apache Spark workloads across distributed data environments.
- Design and manage efficient data pipelines between Snowflake, Kafka, and the Hadoop ecosystem, ensuring seamless data movement and transformation.
- Implement data storage solutions in Snowflake and manage seamless data transfers to/from Hadoop (EMR) and other environments.

Cloud & AWS Services:
- Work closely with AWS services such as EC2, S3, ECS, Lambda, IAM, RDS, and CloudWatch to build scalable, cost-efficient solutions for data management and processing.
- Manage AWS EMR clusters, ensuring they are optimized for big data workloads and integrated with other AWS services.

Security & Compliance:
- Manage and configure Kerberos authentication and access control mechanisms within the Hadoop ecosystem (HDFS, YARN, Spark) to ensure data security.
- Implement encryption and secure data transfer policies within Hadoop clusters, Kafka, HBase, and OpenSearch to meet compliance and regulatory requirements.
- Manage user roles and permissions for access to Snowflake and ensure seamless integration of security policies across platforms.

Monitoring & Troubleshooting:
- Set up and manage monitoring solutions to ensure the health of the Hadoop ecosystem and related components.
- Actively monitor and troubleshoot issues with Spark, Kafka, HBase, OpenSearch, and other distributed systems.
- Provide proactive support to address performance issues, bottlenecks, and failures.

Automation & Optimization:
- Automate the deployment, scaling, and management of Hadoop and other big data systems using scripting languages (Bash, Python).
- Optimize the configurations and performance of EMR, Spark, Kafka, HBase, and OpenSearch.
- Develop scripts and utilities for backup, job monitoring, and performance tuning.
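A small boto3 sketch of the EMR automation this posting describes: checking a cluster's state and resizing a task instance group. The cluster and instance-group IDs are placeholders.

```python
# Sketch: check an EMR cluster's state and resize a task instance group.
# Cluster and instance-group IDs are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

desc = emr.describe_cluster(ClusterId="j-EXAMPLE123")
print("cluster state:", desc["Cluster"]["Status"]["State"])

# Scale the task group out to 8 nodes
emr.modify_instance_groups(
    ClusterId="j-EXAMPLE123",
    InstanceGroups=[{"InstanceGroupId": "ig-EXAMPLETASK", "InstanceCount": 8}],
)
```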
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Develop and maintain data-driven applications using Scala and PySpark. Work with large datasets, performing data analysis, building data pipelines, and optimizing performance.
Posted 1 month ago
4.0 - 5.0 years
6 - 7 Lacs
Bengaluru
Work from Office
Develop and manage data pipelines using Snowflake. Optimize performance and data warehousing strategies.
Posted 1 month ago