Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
9 - 14 years
20 - 30 Lacs
Allahabad
Work from Office
(PySpark/NoSQL is mandatory)
1. Should be strong in PySpark.
2. Should have hands-on experience with the MWAA (Airflow) / AWS EMR (Hadoop, Hive) framework.
3. Hands-on, working knowledge of Python.
4. Knowledge of AWS services such as EMR, S3, Lambda, Step Functions, and Aurora RDS.
5. Good knowledge of RDBMS and SQL.
6. Should be able to work as an individual contributor.
Good to have:
1. Experience converting large data sets from RDBMS to NoSQL.
2. Experience building data lakes and configuring Delta tables (see the sketch below).
3. Good experience with compute and cost optimization.
4. Ability to understand the environment and use case and build holistic frameworks.
Soft skills:
1. Good communication skills to interact with IT stakeholders and the business, and to understand pain points through to delivery.
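For illustration, a minimal PySpark sketch of the kind of RDBMS-to-Delta load this role involves: snapshot a relational table into a Delta table on S3. The JDBC URL, table name, credentials, and bucket are hypothetical placeholders, and it assumes the Delta Lake package is installed on the EMR cluster.

```python
from pyspark.sql import SparkSession

# Assumes the Delta Lake package is on the cluster
# (e.g. submitted with --packages io.delta:delta-spark_2.12:3.1.0).
spark = (
    SparkSession.builder
    .appName("rdbms-to-delta")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical source table and credentials -- placeholders, not real values.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/appdb")
    .option("dbtable", "public.orders")
    .option("user", "etl_reader")
    .option("password", "change-me")
    .load()
)

# Land the snapshot as a Delta table in the lake.
orders.write.format("delta").mode("overwrite").save("s3://example-bucket/lake/orders")
```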
Posted 2 months ago
3 - 8 years
3 - 8 Lacs
Pune
Work from Office
Job Title: Medical Records & Data Management Specialist - Diagnostic Imaging
Work Location: Baner, Pune
Employment Type: Full-time, on-premise (8 hours/day)
Department: Radiology Support / Health Information Management
Position Overview
As a Medical Records & Data Management Specialist, you will be responsible for handling high-volume patient records related to radiology and diagnostic imaging. You will ensure data accuracy, completeness, and timely entry into our healthcare systems. The role demands strong domain knowledge, precision, confidentiality, and a working knowledge of healthcare documentation and EMR/EHR systems.
Key Responsibilities
Manage end-to-end data entry and digital filing of radiology and pathology reports (CT, MRI, X-ray, ultrasound, etc.) into the medical records systems.
Cross-check diagnostic information, ensure standardization of formats, and maintain consistency in patient records.
Collaborate closely with radiologists, technologists, and IT staff to ensure records are accurate and accessible.
Work with PACS/RIS systems and internal EMR tools to index, tag, and archive medical documents.
Track and retrieve patient data on request for follow-ups, audits, and clinical reviews.
Identify missing or inconsistent data and ensure timely rectification.
Maintain strict confidentiality in line with HIPAA or equivalent data privacy regulations.
Support internal audits, data verification, and medical coding teams as needed.
Desired Candidate Profile
Education: Bachelor's degree in Life Sciences, Health Information Management, Radiologic Technology, or Allied Health fields (preferred). A postgraduate diploma or certification in Medical Records Management, Health Information Systems, or Hospital Administration is a strong plus.
Experience: Minimum 4-6 years of experience in medical data entry, ideally in a radiology/imaging center, diagnostic lab, or hospital setup. Proven experience working with PACS/RIS/EMR/EHR platforms is essential. Familiarity with radiology report structure, medical terminology, and DICOM workflows is highly desirable. Previous exposure to HIPAA, ICD-10, or healthcare compliance frameworks is a bonus.
Skills & Competencies: Strong proficiency in MS Excel and medical database tools. Strong written and verbal communication skills.
Working Hours: 8 hours/day, Monday to Saturday, with flexibility to work late nights if required.
Posted 2 months ago
5 - 10 years
15 - 30 Lacs
Bengaluru
Work from Office
Urgent Hiring: AWS Data Engineers, Senior Data Engineers & Lead Data Engineers
Apply Now: Send your resume to heena.ruchwani@gspann.com
Location: Bangalore (5+ years' experience)
Company: GSPANN Technologies, Inc.
GSPANN Technologies is seeking talented professionals with 4+ years of experience to join our team in Bangalore. We are looking for immediate joiners who are passionate about data engineering and eager to take on exciting challenges.
Key Skills & Experience:
4+ years of hands-on experience with AWS data services (Glue, Redshift, S3, Lambda, EMR, Athena, etc.)
Strong expertise in big data technologies (Spark, Hadoop, Kafka)
Proficiency in SQL, Python, and Scala
Hands-on experience with ETL pipelines, data modeling, and cloud-based data solutions
Immediate joiners preferred! If you're ready to contribute to dynamic, data-driven projects and advance your career with GSPANN Technologies, apply today: send your resume to heena.ruchwani@gspann.com.
Posted 2 months ago
10 - 13 years
27 - 32 Lacs
Bengaluru
Work from Office
Department: ISS
Reports To: Head of Data Platform - ISS
Grade: 7
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our team and feel like you're part of something bigger.
Department Description
The ISS Data Engineering Chapter is an engineering group comprised of three sub-chapters - Data Engineers, Data Platform and Data Visualisation - that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our investment process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading.
Purpose of your role
This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.
Key Responsibilities
Design, develop and maintain scalable data pipelines and architectures to support data ingestion, integration and analytics.
Be accountable for technical delivery and take ownership of solutions.
Lead a team of senior and junior developers, providing mentorship and guidance.
Collaborate with enterprise architects, business analysts and stakeholders to understand data requirements, validate designs and communicate progress.
Drive technical innovation within the department to increase code reusability, code quality and developer productivity.
Challenge the status quo by bringing the very latest data engineering practices and techniques.
Essential Skills and Experience
Core Technical Skills
Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services such as Lambda, EMR, MSK, Glue and S3.
Experience designing event-based or streaming data architectures using Kafka.
Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
Expert in designing, building and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
Data security and performance optimization: experience implementing data access controls to meet regulatory requirements.
Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
Experience implementing CDC ingestion.
Experience using orchestration tools (Airflow, Control-M, etc.).
Bonus Technical Skills
Strong experience in containerisation and deploying applications to Kubernetes.
Strong experience in API development using Python-based frameworks like FastAPI.
Key Soft Skills
Problem-solving: leadership experience in problem-solving and technical decision-making.
Communication: strong in strategic communication and stakeholder engagement.
Project management: experienced in overseeing project lifecycles, working with project managers to manage resources.
Posted 2 months ago
5 - 8 years
2 - 6 Lacs
Navi Mumbai
Work from Office
Skill required: Order to Cash - Account Management
Designation: Order to Cash Operations Senior Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years
What would you do?
You will be aligned with our Finance Operations vertical and will help us determine financial outcomes by collecting operational data/reports, conducting analysis, and reconciling transactions; optimizing working capital; providing real-time visibility and end-to-end management of revenue and cash flow; and streamlining billing processes. This team oversees the entire process, from customer inquiry and sales order through delivery and invoicing. The Cash Application Processing team focuses on solving queries related to cash applications and coordinating with customers. The role requires a good understanding of cash applications, the process of applying unapplied cash, reconciliation of suspense accounts in cash application, and processing payments from receipt to finalization. You will implement client account plans through relationship development and opportunity pursuits that build deeper client relationships, including monitoring existing services to identify opportunities that provide additional and innovative value to the client.
What are we looking for?
Analytical thinking: the ability to read and understand issues/problems.
Healthcare experience: EPIC or ORMB.
Roles and Responsibilities:
In this role you are required to analyze and solve increasingly complex problems.
Your day-to-day interactions are with peers within Accenture.
You are likely to have some interaction with clients and/or Accenture management.
You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments.
Decisions that you make impact your own work and may impact the work of others.
In this role you would be an individual contributor and/or oversee a small work effort and/or team.
Please note that this role may require you to work in rotational shifts.
Posted 2 months ago
3 - 5 years
3 - 5 Lacs
Navi Mumbai
Work from Office
Skill required: Order to Cash - Account Management
Designation: Order to Cash Operations Analyst
Qualifications: Any Graduation
Years of Experience: 3 to 5 years
Language Ability: English (International) - Advanced
What would you do?
You will be aligned with our Finance Operations vertical and will help us determine financial outcomes by collecting operational data/reports, conducting analysis, and reconciling transactions; optimizing working capital; providing real-time visibility and end-to-end management of revenue and cash flow; and streamlining billing processes. This team oversees the entire process, from customer inquiry and sales order through delivery and invoicing. The Cash Application Processing team focuses on solving queries related to cash applications and coordinating with customers. The role requires a good understanding of cash applications, the process of applying unapplied cash, reconciliation of suspense accounts in cash application, and processing payments from receipt to finalization. You will implement client account plans through relationship development and opportunity pursuits that build deeper client relationships, including monitoring existing services to identify opportunities that provide additional and innovative value to the client.
What are we looking for?
Analytical thinking: the ability to read and understand issues/problems.
Healthcare experience: EPIC or ORMB.
Roles and Responsibilities:
In this role you are required to analyze and solve lower-complexity problems.
Your day-to-day interaction is with peers within Accenture before updating supervisors.
In this role you may have limited exposure to clients and/or Accenture management.
You will be given moderate-level instruction on daily work tasks and detailed instructions on new assignments.
The decisions you make impact your own work and may impact the work of others.
You will be an individual contributor as part of a team, with a focused scope of work.
Please note that this role may require you to work in rotational shifts.
Posted 2 months ago
3 - 6 years
10 - 15 Lacs
Pune
Work from Office
Role & responsibilities
Requirements:
- 3+ years of hands-on experience with AWS services including EMR, Glue, Athena, Lambda, SQS, OpenSearch, CloudWatch, VPC, IAM, AWS Managed Airflow, security groups, S3, RDS, and DynamoDB.
- Proficiency in Linux and experience with management tools like Apache Airflow and Terraform (a minimal Airflow sketch follows below).
- Familiarity with CI/CD tools, particularly GitLab.
Responsibilities:
- Design, deploy, and maintain scalable and secure cloud and on-premises infrastructure.
- Monitor and optimize performance and reliability of systems and applications.
- Implement and manage continuous integration and continuous deployment (CI/CD) pipelines.
- Collaborate with development teams to integrate new applications and services into existing infrastructure.
- Conduct regular security assessments and audits to ensure compliance with industry standards.
- Provide support and troubleshooting assistance for infrastructure-related issues.
- Create and maintain detailed documentation for infrastructure configurations and processes.
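For context on the Airflow piece of this stack, a minimal DAG sketch, assuming Airflow 2.x as on AWS Managed Airflow; the DAG id, schedule, and bucket paths are hypothetical placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical nightly sync; ids, schedule, and buckets are placeholders.
with DAG(
    dag_id="nightly_s3_sync",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # daily at 02:00
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="aws s3 sync s3://example-src/raw /tmp/raw",
    )
    load = BashOperator(
        task_id="load",
        bash_command="aws s3 sync /tmp/raw s3://example-dst/staged",
    )
    extract >> load  # extract must finish before load starts
```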
Posted 2 months ago
1 - 6 years
3 - 6 Lacs
Mumbai Suburbs, Navi Mumbai, Mumbai (All Areas)
Work from Office
JOB DESCRIPTION - DATA ENGINEER
Department: Technology
Location: Mumbai - Lower Parel
Employment Type: Internship / Full-time
Roles & Responsibilities
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Key Skills / Requirements
We are looking for a candidate for a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
Advanced SQL knowledge, knowledge of relational/document databases and query authoring (SQL), as well as working familiarity with a variety of databases.
Experience designing and implementing efficient and scalable data pipelines to extract, transform, and load (ETL) data from various sources.
Experience building and maintaining robust data architectures, including databases and data warehouses, to ensure data availability and integrity.
Strong analytic skills related to working with unstructured datasets.
Knowledge of AWS cloud services: EC2, EMR, RDS, Redshift.
Knowledge of Python is a must.
Benefits
Competitive salary packages and bonuses.
Mediclaim plans for you and your dependents.
Posted 2 months ago
6 - 10 years
4 - 7 Lacs
Chennai
Work from Office
Minimum 5+ years of relevant experience. A good candidate for DAP production support would be a good communicator with experience supporting some or all of our stack (Snowflake; AWS services for data such as EMR, Lambda, and S3; StreamSets, Informatica, or similar) and ideally could help us create dashboards and alerts. Should have excellent interpersonal skills.
Contact Person: Hanna
Contact Number: 9840082217
Email: hanna@gojobs.biz
Posted 2 months ago
0 - 5 years
0 - 0 Lacs
Chennai
Work from Office
Role & responsibilities
Here's what we offer:
Practice and Learn: Work on real-world cases with a team of senior doctors who support your growth.
Leadership Pathways: The opportunity to rise into leadership roles and make a significant impact as we scale our practice.
Recognition and Respect: We deeply value your contributions; every effort is acknowledged and celebrated.
A Unique Model of Care: Practice Natural Molecular Therapy (NMT), a breakthrough treatment approach for chronic diseases, with patients under your care from the comfort of a digital environment.
Global Reach: Deliver healthcare to patients worldwide, all from a digital platform; think of us as a hospital in the cloud.
Work-Life Balance: While we have a 12-hour workday (6 days a week), we deeply value your personal well-being. Our office environment is designed to promote a supportive, balanced lifestyle where hard work is paired with ample opportunities to recharge and grow. You'll be working with a dynamic, empathetic team at Amura Health, where collaboration and mutual respect ensure that no one is left behind. As part of our culture, we prioritize mental health, team camaraderie, and personal development, making your journey here both fulfilling and rewarding.
Preferred candidate profile
Fresh MBBS graduates or working professionals ready to explore the future of medicine and healthcare at Amura Health.
Passionate about learning, growth, and clinical excellence.
Comfortable working in a tech-driven, innovative environment with an openness to virtual care.
Committed to hard work, but also looking for a place where your efforts are recognized and respected.
Interested in becoming part of a rockstar team at Amura Health that holds itself to the highest standards of professionalism, empathy, and innovation.
Perks and benefits
Competitive Salary: 10.8 lakh per annum (90,000 CTC per month).
Unmatched Learning Opportunities: Get firsthand exposure to NMT, a revolutionary form of healthcare.
Posted 2 months ago
1 - 2 years
2 - 4 Lacs
Noida
Work from Office
Role & responsibilities
Follow up with the insurance company to check on claim status.
Identify denial reasons and work on resolution.
Save claims from getting written off through timely follow-up.
Insurance collection and insurance ageing.
Prepare various AR reports, such as ageing reports and collection reports.
Analyze claims.
Initiate telephone calls to insurance companies requesting the status of claims in queue regarding past-due invoices, and establish payment arrangements.
Meet quality and productivity standards.
Process health insurance claims.
Contact insurance companies for further explanation of denials and underpayments.
Take appropriate action on claims to guarantee resolution.
Audit claims and ensure accurate and timely follow-up where required.
Review denials to determine the necessary steps for claim review.
Respond to client inquiries via phone and email regarding account or software issues.
NOTE: This role is available only for Noida/Ghaziabad/Mayur Vihar/New Ashok Nagar/Laxmi Nagar/Vinod Nagar/Ghazipur/Khora candidates.
Perks and benefits
Posted 2 months ago
8 - 10 years
27 - 30 Lacs
Pune
Work from Office
Responsibilities
Design, develop, and implement scalable and efficient data pipelines in the cloud using Python, SQL, and relevant technologies.
Build and maintain data infrastructure on platforms such as AWS, leveraging services like EMR, Redshift, and others.
Collaborate with data scientists, analysts, and other stakeholders to understand their requirements and provide the necessary data solutions.
Develop and optimize ETL (Extract, Transform, Load) processes to ensure the accuracy, completeness, and timeliness of data.
Create and maintain data models, schemas, and database structures using PostgreSQL and other relevant database technologies.
Use reporting tools such as Superset (good to have: Domo, Tableau, or QuickSight) to develop visually appealing and insightful data visualizations and dashboards.
Monitor and optimize the performance and scalability of data systems, ensuring high availability and reliability.
Implement and maintain data security and privacy measures to protect sensitive information.
Collaborate with the engineering team to integrate data solutions into existing applications or build new applications as required.
Stay up to date with industry trends, emerging technologies, and best practices in data engineering.
Qualifications:
Bachelor's or master's degree in Computer Science, Engineering, or a related field.
8+ years of experience.
Strong proficiency in Python and SQL for data manipulation, analysis, and scripting.
Extensive experience with cloud platforms, particularly AWS, and working knowledge of services like EMR, Redshift, and S3.
Solid understanding of data warehousing concepts and experience with relational databases like PostgreSQL.
Familiarity with data visualization and reporting tools such as Superset, Domo, or Tableau.
Experience building and maintaining data pipelines using tools like Airflow.
Knowledge of Python web frameworks like Flask or Django for building data-driven applications (a sketch follows below).
Strong problem-solving and analytical skills, with keen attention to detail.
Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
Proven ability to work in a fast-paced environment, prioritize tasks, and meet deadlines.
If you are a talented Data Engineer with a passion for leveraging data to drive insights and impact, we would love to hear from you. Join our team and contribute to building robust data infrastructure and pipelines that power our organization's data-driven decision-making process.
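On the "Flask or Django for data-driven applications" point above, a minimal sketch of a Flask endpoint serving an aggregate from PostgreSQL; the DSN, table, and route are hypothetical placeholders.

```python
from flask import Flask, jsonify
import psycopg2

app = Flask(__name__)

# Hypothetical DSN -- swap in real connection settings.
DSN = "dbname=analytics user=reporting password=change-me host=db.example.com"

@app.route("/metrics/daily-signups")
def daily_signups():
    # One short aggregate query against a hypothetical events table.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT date_trunc('day', created_at)::date AS day, count(*) "
            "FROM events WHERE event_type = 'signup' "
            "GROUP BY 1 ORDER BY 1 DESC LIMIT 30"
        )
        rows = cur.fetchall()
    return jsonify([{"day": str(day), "signups": n} for day, n in rows])

if __name__ == "__main__":
    app.run(port=8000)
```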
Posted 2 months ago
3 - 7 years
6 - 16 Lacs
Bengaluru
Work from Office
Job Description: AWS Data Engineer
We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Experience: 3 to 7 years
Location: Bangalore, Pune, Hyderabad, Coimbatore, Delhi NCR, Mumbai
Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift (see the Athena sketch below)
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
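As a small illustration of responsibility 4 (analytics with Amazon Athena), a hedged boto3 sketch that runs a query and prints the rows; the region, database, table, and results bucket are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Hypothetical database, table, and results bucket.
query = "SELECT status, count(*) AS n FROM web_logs GROUP BY status"
qid = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would add a timeout).
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    # First row of the result set is the header, so skip it.
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"][1:]:
        print([col.get("VarCharValue") for col in row["Data"]])
```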
Posted 2 months ago
3 - 8 years
6 - 16 Lacs
Mumbai
Work from Office
Job Description: AWS Data Engineer
We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Experience: 3 to 7 years
Location: Bangalore, Pune, Hyderabad, Coimbatore, Delhi NCR, Mumbai
Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
Posted 2 months ago
3 - 8 years
6 - 16 Lacs
Bengaluru
Work from Office
Job Description: AWS Data Engineer
We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Experience: 3 to 7 years
Location: Bangalore, Pune, Hyderabad, Coimbatore, Delhi NCR, Mumbai
Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
Posted 2 months ago
8 - 13 years
10 - 20 Lacs
Bengaluru
Hybrid
Design, develop, and maintain data pipelines using AWS EMR, Databricks, PySpark, SQL, and Airflow. Optimize and troubleshoot data processing workflows. Ensure data quality and integrity. Implement data security and compliance measures.
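On "ensure data quality and integrity", one common pattern is a PySpark check that fails the pipeline run when key constraints break, so the orchestrator marks the run red. A minimal sketch with placeholder paths and column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()

# Hypothetical pipeline output -- path and key column are placeholders.
df = spark.read.parquet("s3://example-bucket/curated/orders/")

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail the task loudly so the scheduler (e.g. Airflow) surfaces the problem.
assert null_ids == 0, f"{null_ids} rows with null order_id"
assert dupes == 0, f"{dupes} duplicate order_id rows"
print(f"DQ passed: {total} rows, no null or duplicate keys")
```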
Posted 2 months ago
2 - 6 years
12 - 16 Lacs
Bengaluru
Work from Office
Design, implement, and manage large-scale data processing systems using big data technologies such as Hadoop, Apache Spark, and Hive.
Develop and manage our database infrastructure based on relational database management systems (RDBMS), with strong expertise in SQL.
Utilize scheduling tools like Airflow, Control-M, or shell scripting to automate data pipelines and workflows.
Write efficient code in Python and/or Scala for data manipulation and processing tasks.
Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Proficiency in big data technologies, including Hadoop, Apache Spark, and Hive.
Strong understanding of AWS services, particularly S3, Redshift, and EMR.
Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization.
Experience using scheduling tools such as Airflow, Control-M, or shell scripting.
Practical experience in Python and/or Scala programming languages.
Preferred technical and professional experience
Knowledge of Core Java (1.8 preferred) is highly desired.
Excellent communication skills and a willing attitude towards learning.
Solid experience in Linux and shell scripting.
Experience with PySpark or Spark is nice to have.
Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have.
Experience in data modelling, data quality assurance, and load assurance is nice to have.
Posted 2 months ago
2 - 6 years
12 - 16 Lacs
Bengaluru
Work from Office
Design, construct, install, test, and maintain highly scalable data management systems using big data technologies such as Apache Spark (with a focus on Spark SQL) and Hive.
Manage and optimize our data warehousing solutions, with a strong emphasis on SQL performance tuning (an example follows below).
Implement ETL/ELT processes using tools like Talend or custom scripts, ensuring efficient data flow and transformation across our systems.
Utilize AWS services including S3, EC2, and EMR to build and manage scalable, secure, and reliable cloud-based solutions.
Develop and deploy scripts in Linux environments, demonstrating proficiency in shell scripting.
Utilize scheduling tools such as Airflow or Control-M to automate data processes and workflows.
Implement and maintain metadata-driven frameworks, promoting reusability, efficiency, and data governance.
Collaborate closely with DevOps teams utilizing SDLC tools such as Bamboo, JIRA, Bitbucket, and Confluence to ensure seamless integration of data systems into the software development lifecycle.
Communicate effectively with both technical and non-technical stakeholders for handover, incident management reporting, etc.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Demonstrated expertise in big data technologies, specifically Apache Spark (focus on Spark SQL) and Hive.
Extensive experience with AWS services, including S3, EC2, and EMR.
Strong expertise in data warehousing and SQL, with experience in performance optimization.
Experience with ETL/ELT implementation (such as Talend).
Proficiency in Linux, with a strong background in shell scripting.
Preferred technical and professional experience
Familiarity with scheduling tools like Airflow or Control-M.
Experience with metadata-driven frameworks.
Knowledge of DevOps tools such as Bamboo, JIRA, Bitbucket, and Confluence.
Excellent communication skills and a willing attitude towards learning.
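As one concrete instance of the Spark SQL performance tuning this posting emphasizes, broadcasting a small dimension table avoids shuffling the large fact table in a join. A minimal sketch with hypothetical Hive table names and output path:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-demo").enableHiveSupport().getOrCreate()

# Hypothetical Hive tables: a large fact table and a small dimension table.
# The BROADCAST hint ships the small table to every executor, avoiding a
# shuffle of the large side -- a common first step in Spark SQL tuning.
report = spark.sql("""
    SELECT /*+ BROADCAST(d) */
           d.region,
           SUM(f.amount) AS total_amount
    FROM   sales_fact f
    JOIN   region_dim d ON f.region_id = d.region_id
    GROUP BY d.region
""")

# Write the result as Parquet so downstream reads stay cheap.
report.write.mode("overwrite").parquet("s3://example-bucket/marts/sales_by_region/")
```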
Posted 2 months ago
2 - 6 years
12 - 16 Lacs
Kochi
Work from Office
Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Experience developing PySpark code for AWS Glue jobs and for EMR (a skeleton follows below).
Experience working on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
Experience developing Python and PySpark programs for data analysis.
Good working experience with Python to develop a custom framework for generating rules (much like a rules engine).
Experience developing Hadoop streaming jobs using Python for integrating Python-API-supported applications.
Experience developing Python code to gather data from HBase and designing solutions implemented using PySpark.
Experience applying business transformations using Apache Spark DataFrames/RDDs and utilizing Hive context objects to perform read/write operations.
Experience rewriting Hive queries in Spark SQL to reduce overall batch time.
Preferred technical and professional experience
Understanding of DevOps.
Experience building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
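For the "PySpark code for AWS Glue jobs" requirement, here is the standard Glue job skeleton with a simple JSON-to-Parquet conversion; the bucket paths are placeholder values.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard AWS Glue job boilerplate; JOB_NAME is passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON from S3, convert to a DataFrame, and write back as Parquet.
dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/raw/events/"]},
    format="json",
)
dyf.toDF().write.mode("overwrite").parquet("s3://example-bucket/clean/events/")

job.commit()
```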
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities:
Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
Experienced in developing efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and big data technologies, for various use cases built on the platform.
Experience in developing streaming pipelines (see the sketch below).
Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Total 6-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills.
Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
Good to excellent SQL skills.
Preferred technical and professional experience
Certification in AWS and Databricks, or Cloudera Spark certified developers.
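On the streaming-pipelines requirement, a minimal Structured Streaming sketch reading JSON events from Kafka into S3. The brokers, topic, schema, and paths are hypothetical, and the spark-sql-kafka package must be available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Hypothetical topic and brokers -- placeholders only.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1.example.com:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Assumed event payload shape for this sketch.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
])

# Parse the JSON payload and stream it to the lake with checkpointing.
events = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e")).select("e.*")
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/clickstream/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/clickstream/")
    .start()
)
query.awaitTermination()
```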
Posted 2 months ago
3 - 8 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role
Role: Program Manager
Career Band: C1
Description
Experience with Epic system components and administration:
Epic System Components: Thorough understanding of the Epic system's components, infrastructure and system architecture; Epic certifications in system administration.
User Management: Knowledge of administering user accounts, roles, and permissions for specific functionalities within the Epic system.
System Monitoring: Continuously monitoring the performance and health of the Epic system to ensure optimal operation and quick resolution of any issues.
Data Security: Implementing and maintaining robust security measures to protect patient data and ensure compliance with healthcare regulations.
System Updates: Experience applying updates and patches to the Epic system to keep it current with the latest features and security enhancements.
Training and Support: Providing training and support to end-users to ensure they can effectively use the Epic system and troubleshoot any issues that arise.
Experience with migration of Epic systems to Azure Cloud:
6+ years of relevant experience migrating Epic systems to Azure Cloud.
Planning and Evaluation: Experience working with Microsoft and Epic to evaluate the task and lay out an appropriate plan.
Infrastructure Setup: Setting up the necessary Azure infrastructure, including virtual machines, storage, and networking components, to support the Epic system.
Data Migration: Migrating all relevant data from the on-premises Epic system to the Azure cloud, ensuring data integrity and minimal downtime.
Testing and Validation: Conducting thorough testing to ensure that the Epic system functions correctly in the Azure environment and meets performance and security requirements.
Go-Live and Support: Providing ongoing support to address any issues that arise and ensure a smooth transition for end-users.
Posted 2 months ago
3 - 6 years
4 - 8 Lacs
Karnataka
Work from Office
Cerner Scheduling and Registration
About The Role
Experience: 3-10 years
Location: Hyderabad/Bangalore/Pune/Chennai
Experience with Cerner application support, incident resolution, and implementation of Cerner Millennium projects.
Experience in configuring and troubleshooting Cerner solution functionalities/components.
Perform complex troubleshooting investigations, documenting notes and knowledge articles.
Gather requirements, determine the scope of work, and plan for on-time delivery.
Ability to work self-sufficiently on assigned time-sensitive tasks.
Develop and maintain good relationships with peers and clients; provide timely feedback to encourage success.
Strong communication skills with excellent interpersonal skills, both in written and verbal correspondence.
Ability to learn and adapt to a changing landscape, acquire new skills as technology advances, and work and coordinate with global teams.
Readiness to work odd hours, on call, and on weekends as and when needed. Should be open to shifts.
Required Skills: Cerner Scheduling and Registration
Experience in Cerner Scheduling and Registration: PMO office, PMDB tools, PMDB documents, PM conversations, PM rules, scheduling tool, SCH DB tools, SCH reporting.
Posted 2 months ago
2 - 7 years
7 - 11 Lacs
Bengaluru
Work from Office
Description
Please find below the description for Privacy developers; estimates are needed for two developers, both offshore.
Experience with workflow actions using Apache FreeMarker.
Understanding of JSON structure.
Able to perform minor adjustments in the Apigee configuration.
These resources shall be responsible for implementing the enhancements needed in the OneTrust application. They also need to be able to adjust the current integrations to reflect the configurations in OneTrust. Knowledge of the OneTrust Privacy Rights Automation and Data Mapping modules is a plus.
Additional Details
Global Grade: C
Level: To be defined
Named Job Posting? (if Yes, needs to be approved by SCSC): No
Remote work possibility: No
Global Role Family: To be defined
Local Role Name: To be defined
Local Skills: OneTrust Privacy developer
Languages Required: English
Role Rarity: To be defined
Posted 2 months ago
7 - 11 years
12 - 16 Lacs
Bengaluru
Work from Office
About us:
Working at Target means helping all families discover the joy of everyday life. We bring that vision to life through our values and culture.
About the Role:
As a Lead Data Engineer, you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions. You will own and drive data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability. Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving. You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives. Additionally, experience with API development for serving low-latency data and Customer Data Platforms (CDP) will be a strong plus.
Key Responsibilities:
Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure).
Design and implement real-time and batch data processing solutions, ensuring data is efficiently processed and made available for analytical and operational use.
Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making (a sketch follows below).
Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency.
Drive data governance, security, and compliance best practices.
Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions.
Lead the design, implementation, and lifecycle management of data services and solutions.
Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development.
Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design.
About You:
7+ years of experience in data engineering, software development, or distributed systems.
Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks.
Strong programming skills in Scala and/or Java (Python is a plus).
Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystems (e.g., S3, BigQuery, Databricks, EMR, Snowflake, etc.).
Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data.
Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.).
Strong knowledge of data modeling, ETL pipeline design, and performance optimization.
Understanding of data governance, security, and compliance in large-scale data environments.
Experience with Customer Data Platforms (CDP) or customer-centric data processing is a strong plus.
Strong problem-solving skills and the ability to work in complex, unstructured environments.
Excellent communication and collaboration skills, with experience working in cross-functional teams.
Why Join Us?
Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment.
Influence and shape the future of data architecture and real-time data services at Target.
Solve high-impact business problems using scalable, low-latency data solutions.
Be part of a culture that values innovation, learning, and growth.
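As a hedged illustration of "APIs to expose low-latency, high-throughput data" (a generic pattern, not Target's actual stack): a FastAPI endpoint reading precomputed profiles from a hypothetical Redis serving layer that the pipelines populate.

```python
from fastapi import FastAPI, HTTPException
import redis

app = FastAPI()
# Hypothetical Redis cache fronting the pipeline's serving layer.
cache = redis.Redis(host="cache.example.com", port=6379, decode_responses=True)

@app.get("/customers/{customer_id}/profile")
def get_profile(customer_id: str) -> dict:
    # Low-latency read path: the batch/stream pipeline writes profiles to Redis,
    # and the API serves them without touching the warehouse.
    payload = cache.get(f"profile:{customer_id}")
    if payload is None:
        raise HTTPException(status_code=404, detail="profile not found")
    return {"customer_id": customer_id, "profile": payload}
```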
Posted 2 months ago
India has a growing demand for professionals skilled in EMR (Electronic Medical Records) due to the increasing digitalization of healthcare systems. EMR jobs in India offer a wide range of opportunities for job seekers looking to make a career in this field.
The average salary range for EMR professionals in India varies from ₹3-5 lakhs per annum for entry-level positions to ₹10-15 lakhs per annum for experienced professionals.
In the field of EMR, a typical career path may involve progressing from roles such as EMR Specialist or EMR Analyst to Senior EMR Consultant, and eventually to EMR Project Manager or EMR Director.
In addition to expertise in EMR systems, professionals in this field are often expected to have skills in healthcare data analysis, healthcare IT infrastructure, project management, and knowledge of healthcare regulations.
As you explore EMR jobs in India, remember to showcase your expertise in EMR systems, healthcare data management, and project management during interviews. Prepare confidently and stay updated with the latest trends in the field to enhance your career prospects. Good luck with your job search!