6 - 11 years
10 - 20 Lacs
Chennai, Pune, Coimbatore
Work from Office
Business Analyst: Both SQL knowledge and domain experience in Healthcare Revenue Cycle are required, as listed in the experience section of the job description.
Experience: 6+ years
Location: Pune / Chennai / Coimbatore / Remote
- Prior experience working as a Revenue Cycle Analyst in a healthcare system.
- Working knowledge of relational databases and database structures; SQL experience desired.
- Strong understanding of the data collected and used in EMRs, Revenue Cycle, and Patient Accounting systems.
- Knowledge of and experience with ambulatory and acute clinical, billing, and claims workflows and clinical information systems.
Posted 3 months ago
5 - 8 years
8 - 10 Lacs
Hyderabad
Work from Office
S&P Dow Jones Indices is seeking a Python/Big Data developer to be a key player in the implementation and support of data platforms for S&P Dow Jones Indices. This role requires a seasoned technologist who contributes to application development and maintenance, actively evaluates new products and technologies, and builds solutions that streamline business operations. The candidate must be delivery-focused with solid financial applications experience and will assist in day-to-day support and operations functions, design, development, and unit testing.
Responsibilities and Impact:
- Lead the design and implementation of EMR/Spark workloads using Python, including data access from relational databases and cloud storage technologies.
- Implement new functionality using Python, PySpark, AWS, and Delta Lake.
- Independently produce optimal designs for business use cases and implement them using big data technologies.
- Enhance existing functionality in Oracle/Postgres procedures and functions.
- Performance-tune existing Spark jobs.
- Implement new functionality in Python, Spark, and Hive.
- Collaborate with cross-functional teams to support data-driven initiatives.
- Mentor junior team members and promote best practices.
- Respond to technical queries from the operations and product management teams.
What We're Looking For:
Basic Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or Engineering, or equivalent work experience.
- 5 - 8 years of IT experience in application support or development.
- Hands-on experience writing effective and scalable Python programs.
- Deep understanding of OOP concepts and development models in Python.
- Knowledge of popular Python libraries/ORM libraries and frameworks.
- Exposure to unit testing frameworks such as Pytest.
- Good understanding of Spark architecture, as the system involves data-intensive operations.
- Solid work experience in Spark performance tuning.
- Experience with, or exposure to, the Kafka messaging platform.
- Experience with build tools such as Maven and PyBuilder.
- Exposure to AWS offerings such as EC2, RDS, EMR, Lambda, S3, and Redis.
- Hands-on experience with at least one relational database (Oracle, Sybase, SQL Server, PostgreSQL).
- Hands-on experience with SQL queries and writing stored procedures and functions.
- A strong willingness to learn new technologies.
- Excellent communication skills, with strong verbal and writing proficiencies.
Additional Preferred Qualifications:
- Proficiency in building data analytics solutions on AWS Cloud.
- Experience with microservice and serverless architecture implementation.
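To make the Spark/Delta Lake responsibilities above concrete, here is a minimal, illustrative PySpark sketch, not S&P's actual code, that reads a raw extract from S3 and upserts it into a Delta Lake table. The bucket, paths, and column names are hypothetical, and it assumes Spark is launched with the delta-spark package available (as it is on EMR clusters with Delta enabled).

```python
# Minimal sketch of a PySpark job of the kind described above: read an extract
# from S3, stamp it, and upsert it into a Delta Lake table. Paths, columns, and
# table names are hypothetical; assumes the delta-spark package is on the classpath.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = (
    SparkSession.builder
    .appName("index-prices-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Read a daily extract (hypothetical path) and add a load date for auditing.
daily = (
    spark.read.parquet("s3://example-bucket/raw/prices/2024-01-01/")
    .withColumn("load_date", F.current_date())
)

target_path = "s3://example-bucket/curated/prices_delta/"

if DeltaTable.isDeltaTable(spark, target_path):
    # Upsert (merge) the new rows into the existing Delta table on the business key.
    DeltaTable.forPath(spark, target_path).alias("t").merge(
        daily.alias("s"), "t.index_id = s.index_id AND t.price_date = s.price_date"
    ).whenMatchedUpdateAll().whenNotMatchedInsertAll().execute()
else:
    # First load: create the table partitioned by date for efficient pruning.
    daily.write.format("delta").partitionBy("price_date").save(target_path)

spark.stop()
```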
Posted 3 months ago
8 - 13 years
8 - 18 Lacs
Bangalore Rural
Work from Office
Hi, we are looking for an AWS Administrator. Please see the job description below.
Responsibilities:
1. AWS Administration:
- Manage and maintain a complex and scalable AWS environment, ensuring optimal performance, security, and reliability.
- Implement and monitor AWS services, including EC2 instances, S3 storage, EMR, RDS databases, Lambda functions, and more.
- Troubleshoot and resolve infrastructure issues, network connectivity problems, and performance bottlenecks.
- Collaborate with cross-functional teams to design and implement solutions that align with business needs.
2. Production Deployment:
- Lead and execute the end-to-end process of production deployments and release coordination.
- Develop and maintain deployment pipelines using tools such as AWS CodePipeline, Jenkins, or similar.
- Implement best practices for version control, configuration management, and infrastructure as code (IaC) using tools like Git and CloudFormation.
- Perform release management and rollback procedures, and ensure zero-downtime deployments.
- Collaborate with development and QA teams to ensure proper testing and validation of deployment processes.
3. Infrastructure Automation:
- Automate routine operational tasks, such as provisioning and scaling of resources, using infrastructure automation tools.
- Create and maintain scripts and templates for infrastructure provisioning and configuration management.
4. Security and Compliance:
- Implement security best practices for AWS environments, including identity and access management (IAM), encryption, and monitoring.
- Ensure compliance with industry standards and regulations, and actively participate in security audits and assessments.
5. Performance Optimization:
- Monitor system performance and make recommendations for improvements to ensure high availability and scalability.
- Optimize AWS resources to maximize cost efficiency while maintaining performance.
6. Documentation and Knowledge Sharing:
- Document system architecture, configurations, and deployment processes.
- Share knowledge and mentor junior team members in AWS administration and production deployment practices.
Interested candidates, please share your CV with shereena.muthukutty@thakralone.in
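As a small, hedged illustration of the "automate routine operational tasks" item above, not this employer's actual tooling, the boto3 sketch below scans EC2 instances for a missing governance tag and reports them. The region and tag key are assumptions for the example.

```python
# Illustrative operational-automation sketch: find EC2 instances that are
# missing a required governance tag. Assumes boto3 is installed and AWS
# credentials are configured; region and tag key are hypothetical.
import boto3

REQUIRED_TAG = "CostCenter"  # hypothetical tag enforced by the team

ec2 = boto3.client("ec2", region_name="ap-south-1")
paginator = ec2.get_paginator("describe_instances")

untagged = []
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                untagged.append(instance["InstanceId"])

print(f"Instances missing the {REQUIRED_TAG} tag: {untagged}")
```

A script like this would typically be wrapped in a Lambda function or a scheduled job rather than run by hand.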
Posted 3 months ago
2 - 3 years
0 - 0 Lacs
Mumbai
Work from Office
Job Title: Product Engineer - Big Data
Location: Mumbai
Experience: 3 - 8 Yrs
Job Summary: As a Product Engineer - Big Data, you will be responsible for designing, building, and optimizing large-scale data processing pipelines using cutting-edge Big Data technologies. Collaborating with cross-functional teams, including data scientists, analysts, and product managers, you will ensure data is easily accessible, secure, and reliable. Your role will focus on delivering high-quality, scalable solutions for data storage, ingestion, and analysis, while driving continuous improvements throughout the data lifecycle.
Key Responsibilities:
- ETL Pipeline Development & Optimization: Design and implement complex end-to-end ETL pipelines to handle large-scale data ingestion and processing. Utilize AWS services like EMR, Glue, S3, MSK (Managed Streaming for Kafka), DMS (Database Migration Service), Athena, and EC2 to streamline data workflows, ensuring high availability and reliability.
- Big Data Processing: Develop and optimize real-time and batch data processing systems using Apache Flink, PySpark, and Apache Kafka. Focus on fault tolerance, scalability, and performance. Work with Apache Hudi for managing datasets and enabling incremental data processing.
- Data Modeling & Warehousing: Design and implement data warehouse solutions that support both analytical and operational use cases. Model complex datasets into optimized structures for high performance, easy access, and query efficiency for internal stakeholders.
- Cloud Infrastructure Development: Build scalable cloud-based data infrastructure leveraging AWS tools. Ensure data pipelines are resilient and adaptable to changes in data volume and variety, while optimizing costs and maximizing efficiency using Managed Apache Airflow for orchestration and EC2 for compute resources.
- Data Analysis & Insights: Collaborate with business teams and data scientists to understand data needs and deliver high-quality datasets. Conduct in-depth analysis to derive insights from the data, identifying key trends, patterns, and anomalies to drive business decisions. Present findings in a clear, actionable format.
- Real-time & Batch Data Integration: Enable seamless integration of real-time streaming and batch data from systems like AWS MSK. Ensure consistency in data ingestion and processing across various formats and sources, providing a unified view of the data ecosystem.
- CI/CD & Automation: Use Jenkins to establish and maintain continuous integration and delivery pipelines. Implement automated testing and deployment workflows, ensuring smooth integration of new features and updates into production environments.
- Data Security & Compliance: Collaborate with security teams to ensure data pipelines comply with organizational and regulatory standards such as GDPR, HIPAA, or other relevant frameworks. Implement data governance practices to ensure integrity, security, and traceability throughout the data lifecycle.
- Collaboration & Cross-Functional Work: Partner with engineers, data scientists, product managers, and business stakeholders to understand data requirements and deliver scalable solutions. Participate in agile teams, sprint planning, and architectural discussions.
- Troubleshooting & Performance Tuning: Identify and resolve performance bottlenecks in data pipelines. Ensure optimal performance through proactive monitoring, tuning, and applying best practices for data ingestion and storage.
Skills & Qualifications:
Must-Have Skills:
- AWS Expertise: Hands-on experience with core AWS services related to Big Data, including EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, Athena, and EC2. Strong understanding of cloud-native data architecture.
- Big Data Technologies: Proficiency in PySpark and SQL for data transformations and analysis. Experience with large-scale data processing frameworks like Apache Flink and Apache Kafka.
- Data Frameworks: Strong knowledge of Apache Hudi for data lake operations, including CDC (Change Data Capture) and incremental data processing.
- Database Modeling & Data Warehousing: Expertise in designing scalable data models for both OLAP and OLTP systems. In-depth understanding of data warehousing best practices.
- ETL Pipeline Development: Proven experience in building robust, scalable ETL pipelines for processing real-time and batch data across platforms.
- Data Analysis & Insights: Strong problem-solving skills with a data-driven approach to decision-making. Ability to conduct complex data analysis to extract actionable business insights.
- CI/CD & Automation: Basic to intermediate knowledge of CI/CD pipelines using Jenkins or similar tools to automate deployment and monitoring of data pipelines.
Required Skills: Big Data, ETL, AWS
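Since the posting highlights Apache Hudi with CDC-style incremental processing, here is a minimal, hedged PySpark sketch of a Hudi upsert. The table name, keys, and S3 paths are hypothetical, and it assumes a Spark session launched with the hudi-spark bundle on the classpath (for example, an EMR cluster with Hudi enabled); it is an illustration, not this employer's pipeline.

```python
# Minimal, illustrative upsert into an Apache Hudi table (incremental/CDC-style
# processing). Paths, keys, and columns are hypothetical; assumes Spark is
# started with the hudi-spark bundle available.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-hudi-upsert").getOrCreate()

# Incoming change records, e.g. landed by DMS/MSK into S3 (hypothetical path).
changes = spark.read.json("s3://example-bucket/landing/orders_cdc/")

hudi_options = {
    "hoodie.table.name": "orders",
    "hoodie.datasource.write.recordkey.field": "order_id",
    "hoodie.datasource.write.precombine.field": "updated_at",
    "hoodie.datasource.write.partitionpath.field": "order_date",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert: existing keys are updated (latest `updated_at` wins), new keys inserted.
(
    changes.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3://example-bucket/lake/orders_hudi/")
)
```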
Posted 3 months ago
7 - 12 years
0 - 0 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
Hiring for a top MNC (long-term contract). The candidate must be able to create a variety of infrastructure as code (IaC) on AWS that will be leveraged by data engineering teams, using tools like CloudFormation (CFM) to build, deploy, and operate the platform based on requirements from the data engineers.
2 FTEs; 5+ years of relevant experience as a senior PE.
- Ability to lead a small team (two-pizza squad)
- Ability to create scalable and stable serverless architecture/design
- Expert in Python (object-oriented) development
- Expert in writing Python unit tests
- Extensive use of the following AWS services: S3, Lambda, Glue, SQS, IAM, DynamoDB, CloudWatch, EventBridge, Step Functions, EMR (incl. serverless), Redshift (incl. serverless), API Gateway and/or AppSync; optionally AWS Lake Formation, DMS, DataSync, AppFlow
- Fluent with REST APIs
- Experience with CFM and CDK
- Knowledge of data engineering principles using the services above is important
- Previous experience with Azure AD and OAuth2
- Previous experience in BDD (Business Driven Development) testing or integration testing
- Optionally, previous experience with Node.js/React development
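To illustrate the kind of Python-based IaC mentioned above (CDK alongside CloudFormation), here is a minimal, hedged AWS CDK sketch defining a small serverless slice: an S3 bucket, an SQS queue, and a Lambda with least-privilege grants. Stack and resource names are hypothetical, it assumes aws-cdk-lib is installed and deployed via `cdk deploy`, and it is an illustration rather than the client's actual platform code.

```python
# Minimal AWS CDK (Python) sketch: S3 landing bucket, SQS queue, and a Lambda
# consumer with least-privilege grants. Names and the inline handler are
# hypothetical; run via `cdk synth` / `cdk deploy` with aws-cdk-lib installed.
import aws_cdk as cdk
from aws_cdk import aws_lambda as _lambda, aws_s3 as s3, aws_sqs as sqs


class DataPlatformStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        raw_bucket = s3.Bucket(self, "RawBucket", versioned=True)
        ingest_queue = sqs.Queue(self, "IngestQueue")

        handler = _lambda.Function(
            self, "IngestHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_inline(
                "def handler(event, context):\n    return {'ok': True}\n"
            ),
        )

        # Grants generate scoped IAM policies instead of hand-written ones.
        raw_bucket.grant_read(handler)
        ingest_queue.grant_consume_messages(handler)


app = cdk.App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```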
Posted 3 months ago
0 - 1 years
2 - 3 Lacs
Bengaluru
Work from Office
Position Summary
An Associate, Work Up - DRM, is primarily responsible for collaborating in the secondary phase of the donor and patient matching process for unrelated donors, with a focus on facilitating stem cell donations from unrelated donors. The position requires regular engagement with key stakeholders, both domestically and internationally, including the DKMS International Medical Team, prospective and verified stem cell donors, healthcare experts, national and international stem cell registries, and counterparts within DKMS entities worldwide.
Key Responsibilities
- Notify donors that they have been identified as a potential stem cell match for a patient, educate and consent donors, and facilitate the management of the donor's stem cell collection.
- Work within the Indian Workup team, coordinating peripheral blood stem cell donations between identified donors and one of DKMS-Foundation India's stem cell collection centres.
- Carry out information sessions with identified donors to ensure they are fully and adequately prepared for stem cell donation and collection, paying particular attention to ensuring that the donor gives informed consent.
- Evaluate donors for medical and non-medical factors affecting suitability and eligibility, using guidelines set by DKMS-Foundation India and national regulations, including referring for further medical assessments where required.
- Schedule and manage donor medical assessments, working closely with the patient's team and medical advisors to facilitate donor medical clearance.
- Communicate with national and international transplant physicians and managers to complete the required documentation within designated timelines.
- Work with DKMS-Foundation India's internal databases to record and document accurate donor case notes.
- Communicate and liaise closely with specialist courier companies to schedule national and international transport of stem cell products.
- Travel nationally, as required, to meet donors and facilitate the management of the donor's stem cell collection at DKMS-Foundation India stem cell collection centres.
- Manage relationships with DKMS-Foundation India stem cell collection centres, ensuring they adhere to DKMS global standards; this includes regular conference calls and face-to-face meetings.
- Work closely with the DKMS-Foundation India finance team to ensure all supplier invoices are paid on time and contract terms are met.
- Represent the Donor Request Management team at local and international DKMS working groups, contributing to organizational projects as required.
- Work closely with the DKMS international medical team and DKMS organisations in other countries to contribute to the quality and efficiency of DKMS-BMST policies and processes.
- Respond to and investigate any quality incidents and adverse events, including providing recommendations for corrective and preventive actions.
- Work with the Head of Donor Request Management to resolve donor and transplant centre complaints and respond as necessary.
- Ensure compliance with all medical/health-related standards, policies, procedures, and documentation requirements set by DKMS-BMST, registries, and regulatory authorities such as the National Blood Transfusion Council (NBTC).
- Keep up to date with developments in the field of stem cell donation and comply with the organization's health and safety, confidentiality, data protection, and other policies.
- Participate in staff training, organization/team meetings, and events, as required.
- Perform any additional tasks requested by the Line Manager or Head of Department from time to time.
Key Skills
- Excellent written and verbal communication skills.
- Excellent interpersonal skills, including active listening.
- Strong attention to detail and accuracy, with the ability to multitask efficiently.
- Ability to work to and achieve clear targets and deadlines.
- Proactive work ethic and the ability to work on one's own initiative.
- Flexible approach, with the ability to adapt to new and changing situations.
- Good IT skills, including MS Office (Outlook, Word, Excel, and PowerPoint).
- Ability to work with an understanding of donor and patient confidentiality.
- Strong passion for DKMS-BMST's mission and values, with a high degree of sensitivity and empathy.
- Zeal to work with an NGO for a lifesaving cause.
Qualifications:
- Any Bachelor's degree (preferably with a healthcare background).
- More than 2 years of work experience in any field.
- Previous experience working with large databases and electronic medical record (EMR) systems is preferred.
Posted 3 months ago
12 - 16 years
40 - 45 Lacs
Gurgaon
Work from Office
Overview: Enterprise Data Operations Assoc Manager
Job Overview: As Data Modelling Assoc Manager, you will be the key technical expert overseeing data modelling and will drive a strong vision for how data modelling can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data modellers who create data models for deployment in the Data Foundation layer, ingest data from various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data modelling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will independently analyze project data needs, identify data storage and integration needs and issues, and drive opportunities for data model reuse while satisfying project requirements. The role advocates Enterprise Architecture, Data Design, and D&A standards and best practices. You will be a key technical expert performing all aspects of data modelling, working closely with the Data Governance, Data Engineering, and Data Architecture teams, and will provide technical guidance to junior members of the team as needed. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems, and will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.
Responsibilities:
- Independently complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modelling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Advocate existing Enterprise Data Design standards; assist in establishing and documenting new standards.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates the key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical stores and data lakes; data streaming (consumption/production); and data in transit.
- Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partner with the data science team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.
Qualifications:
- 12+ years of overall technology experience, including at least 6+ years of data modelling and systems architecture.
- 6+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 6+ years of experience developing enterprise data models.
- 6+ years of cloud data engineering experience in at least one cloud (Azure, AWS, GCP).
- 6+ years of experience building solutions in the retail or supply chain space.
- Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models).
- Fluent with Azure cloud services; Azure certification is a plus.
- Experience scaling and managing a team of 5+ data modellers.
- Experience integrating multi-cloud services with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment and CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading, mentoring, hiring, and scaling data teams.
- Strong change manager; comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
- Positive and flexible attitude, adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs.
- Foster a team culture of accountability, communication, and self-management.
- Proactively drive impact and engagement while bringing others along.
- Consistently attain or exceed individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
Differentiating Competencies Required:
- Ability to work with virtual teams (remote work locations) and lead a team of technical resources (employees and contractors) based in multiple locations across geographies.
- Lead technical discussions, driving clarity on complex issues and requirements to build robust solutions.
- Strong communication skills to meet with the business, understand sometimes ambiguous needs, and translate them into clear, aligned requirements.
- Able to work independently with business partners to understand requirements quickly, perform analysis, and lead design review sessions.
- Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business.
- Places the user at the center of decision-making.
- Teams up and collaborates for speed, agility, and innovation.
- Experience with, and embraces, agile methodologies.
- Strong negotiation and decision-making skills.
- Experience managing and working with globally distributed teams.
Posted 3 months ago
2 - 6 years
7 - 11 Lacs
Kolkata
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: AWS Glue
Posted 3 months ago
1 - 6 years
4 - 5 Lacs
Bengaluru
Work from Office
Ortho Coders
• Assign ICD-10, CPT, and HCPCS codes for orthopedic treatments and surgeries
• Review and validate clinical documentation for coding accuracy
• Ensure compliance with coding guidelines and payer policies
• Conduct coding quality audits and error correction
Required Candidate Profile: E&M IP/OP Coders
• Assign E&M codes (CPT, ICD-10, HCPCS) for inpatient and outpatient encounters
• Review physician documentation for medical necessity and compliance
• Adhere to CMS, AAPC, and AHIMA guidelines
Perks and benefits: Plus incentives and perks
Posted 3 months ago
2 - 6 years
4 - 8 Lacs
Pune
Work from Office
Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions Required education Bachelor's Degree Required technical and professional expertise Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive. Strong understanding of AWS services, particularly S3, Redshift, and EMR. Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization. Experience using scheduling tools such as Airflow, Control M, or shell scripting. Practical experience in Python and/or Scala programming languages Preferred technical and professional experience Knowledge of Core Java (1.8 preferred) is highly desired Excellent communication skills and a willing attitude towards learning. Solid experience in Linux and shell scripting. Experience with PySpark or Spark is nice to have Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have Experience in data modelling, data quality assurance, and load assurance is a nice-to-have
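As a hedged illustration of the scheduling work this and the similar listings below describe (Airflow, Control-M, or shell scripting driving Spark pipelines), here is a minimal Apache Airflow DAG that submits a nightly PySpark job. The DAG id, schedule, and script path are hypothetical; it assumes Airflow 2.x with spark-submit available on the worker, and is not any one employer's actual pipeline.

```python
# Minimal Airflow 2.x DAG: run a nightly spark-submit job that loads data into
# the lake/warehouse. DAG id, schedule, script path, and cluster mode are
# hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # run at 02:00 every day
    catchup=False,
    tags=["example", "spark"],
) as dag:
    # Submit a (hypothetical) PySpark script; a non-zero exit code fails the task
    # so Airflow can alert and retry according to its configured policy.
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "s3://example-bucket/jobs/sales_etl.py --run-date {{ ds }}"
        ),
    )
```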
Posted 3 months ago
2 - 4 years
4 - 8 Lacs
Pune
Work from Office
Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive. Strong understanding of AWS services, particularly S3, Redshift, and EMR. Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization. Experience using scheduling tools such as Airflow, Control M, or shell scripting. Practical experience in Python and/or Scala programming languages Preferred technical and professional experience Knowledge of Core Java (1.8 preferred) is highly desired Excellent communication skills and a willing attitude towards learning. Solid experience in Linux and shell scripting. Experience with PySpark or Spark is nice to have Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have Experience in data modelling, data quality assurance, and load assurance is a nice-to-have
Posted 3 months ago
2 - 4 years
4 - 6 Lacs
Bengaluru
Work from Office
Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive. Strong understanding of AWS services, particularly S3, Redshift, and EMR. Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization. Experience using scheduling tools such as Airflow, Control M, or shell scripting. Practical experience in Python and/or Scala programming languages Preferred technical and professional experience Knowledge of Core Java (1.8 preferred) is highly desired Excellent communication skills and a willing attitude towards learning. Solid experience in Linux and shell scripting. Experience with PySpark or Spark is nice to have Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have Experience in data modelling, data quality assurance, and load assurance is a nice-to-have
Posted 3 months ago
7 - 12 years
9 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Python (Programming Language)
Minimum 7.5 years of experience is required.
Educational Qualification: Any graduation
Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using the Databricks Unified Data Analytics Platform. Your typical day will involve working with Python and utilizing your expertise in software development to deliver impactful solutions. The ideal candidate will work in a team environment that demands technical excellence, whose members are expected to hold each other accountable for the overall success of the end product. The focus for this team is on delivering innovative solutions to complex problems, but also with a mind to drive simplicity in refining and supporting the solution by others.
About the Role & Responsibilities:
- Be accountable for delivery of business functionality.
- Work on the AWS cloud to migrate and re-engineer data and applications from on-premises to cloud.
- Be responsible for engineering solutions conformant to enterprise standards, architecture, and technologies.
- Provide technical expertise through a hands-on approach, developing solutions that automate testing between systems.
- Perform peer code reviews, merge requests, and production releases.
- Implement design/functionality using Agile principles.
- Demonstrate a proven track record of quality software development and an ability to innovate outside of traditional architecture/software patterns when needed.
- Show a desire to collaborate in a high-performing team environment, and an ability to influence and be influenced by others.
- Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact.
- Be entrepreneurial and business-minded, ask smart questions, take risks, and champion new ideas.
- Take ownership and accountability.
Experience Required: 3 to 5 years of experience in application program development.
Experience Desired: Knowledge and/or experience with healthcare information domains. Documented experience in a business intelligence or analytic development role on a variety of large-scale projects. Documented experience working with databases larger than 5 TB and excellent data analysis skills. Experience with TDD/BDD. Experience working with Spark and real-time analytic frameworks.
Education and Training Required: Bachelor's degree in Engineering or Computer Science.
Primary Skills: Python, Databricks, Teradata, SQL, UNIX, ETL, data structures, Looker, Tableau, Git, Jenkins, RESTful & GraphQL APIs; AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, IAM.
Additional Skills: Ability to rapidly prototype and storyboard/wireframe development as part of application design. Write referenceable and modular code. Willingness to continuously learn and share learnings with others. Ability to communicate design processes, ideas, and solutions clearly and effectively to teams and clients. Ability to manipulate and transform large datasets efficiently. Excellent troubleshooting skills to root-cause complex issues.
Qualifications: Any graduation
Posted 3 months ago
3 - 5 years
5 - 8 Lacs
Pune
Work from Office
Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive. Strong understanding of AWS services, particularly S3, Redshift, and EMR. Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization. Experience using scheduling tools such as Airflow, Control M, or shell scripting. Practical experience in Python and/or Scala programming languages Preferred technical and professional experience Knowledge of Core Java (1.8 preferred) is highly desired Excellent communication skills and a willing attitude towards learning. Solid experience in Linux and shell scripting. Experience with PySpark or Spark is nice to have Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have Experience in data modelling, data quality assurance, and load assurance is a nice-to-have.
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
Title: Principal Data Engineer (Associate Director)
Department: ISS
Reports To: Head of Data Platform - ISS
Grade: 7
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our team and feel like you're part of something bigger.
Department Description: The ISS Data Engineering Chapter is an engineering group comprised of three sub-chapters - Data Engineers, Data Platform, and Data Visualisation - that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our investment process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading.
Purpose of your role: This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.
Essential Skills and Experience
Core Technical Skills:
- Expert in leveraging cloud-based data platform (Snowflake, Databricks) capabilities to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
Data Security & Performance Optimization:
- Experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).
Bonus Technical Skills:
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.
Key Soft Skills:
- Problem-Solving: Leadership experience in problem-solving and technical decision-making.
- Communication: Strong in strategic communication and stakeholder engagement.
- Project Management: Experienced in overseeing project lifecycles, working with Project Managers to manage resources.
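To ground the event-based/streaming skills listed above, here is a minimal, hedged Spark Structured Streaming sketch that consumes a Kafka (e.g. MSK) topic and lands it on S3 as Parquet. The broker, topic, schema, and paths are hypothetical, and it assumes Spark is launched with the spark-sql-kafka connector; it is an illustration, not Fidelity's platform code.

```python
# Minimal Spark Structured Streaming sketch: Kafka topic -> Parquet on S3.
# Broker, topic, schema, and paths are hypothetical placeholders; assumes the
# spark-sql-kafka package is on the Spark classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("trades-stream").getOrCreate()

event_schema = StructType([
    StructField("trade_id", StringType()),
    StructField("instrument", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "trades")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; decode the value column and parse the JSON payload.
events = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/trades/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/trades/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```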
Posted 3 months ago
5 - 9 years
5 - 12 Lacs
Hyderabad
Work from Office
Skill/Competency Requirements:
- Experience in Big Data technologies and support projects.
- Able to quickly analyze complex SQL queries, grasp the underlying functionality, and provide root cause analysis (RCA).
- Well experienced in working with Hive scripts.
- Experienced in working on Big Data technologies like Hive, Hue, and Oozie.
- Experienced in working on L2/L3 support.
Posted 3 months ago
2 - 7 years
4 - 9 Lacs
Mumbai
Work from Office
- Hands-on project experience in AWS cloud services
- Good knowledge of SQL and experience working with databases like Oracle, MS SQL, etc.
- Experience with AWS services such as S3, RDS, EMR, Redshift, Glue, SageMaker, DynamoDB, and Lambda
Posted 3 months ago
6 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions. Key Responsibilities: Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight). Work with structured and unstructured data to perform data transformation, cleansing, and aggregation. Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow). Optimize PySpark jobs for performance tuning, partitioning, and caching strategies. Design and implement real-time and batch data processing solutions. Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates. Ensure data security, governance, and compliance with industry best practices. Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models. Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization. Perform unit testing and validation to ensure data integrity and reliability. Required Skills & Qualifications: 6+ years of experience in big data processing, ETL, and data engineering. Strong hands-on experience with PySpark (Apache Spark with Python). Expertise in SQL, DataFrame API, and RDD transformations. Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL). Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow). Proficiency in writing optimized queries, partitioning, and indexing for performance tuning. Experience with workflow orchestration tools like Airflow, Oozie, or Prefect. Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines. Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.). Excellent problem-solving, debugging, and performance optimization skills.
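To illustrate the "performance tuning, partitioning, and caching" responsibility above, here is a minimal, hedged PySpark sketch of three common techniques: broadcasting a small dimension, caching a reused dataset, and controlling partition layout on write. Tables, paths, and column names are hypothetical.

```python
# Illustrative PySpark tuning sketch: broadcast join, caching, and partitioned
# writes. All tables, paths, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/lake/orders/")        # large fact
countries = spark.read.parquet("s3://example-bucket/lake/countries/")  # small dim

# 1) Broadcast the small dimension to avoid shuffling the large fact table.
enriched = orders.join(F.broadcast(countries), "country_code")

# 2) Cache a filtered dataset that several downstream aggregations reuse.
enriched = enriched.filter(F.col("status") == "COMPLETED").cache()

daily = enriched.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
by_country = enriched.groupBy("country_code").count()

# 3) Control partition layout and file counts when writing back to the lake.
(
    daily.repartition("order_date")
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/marts/daily_revenue/")
)

by_country.show()
enriched.unpersist()
spark.stop()
```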
Posted 3 months ago
10 - 20 years
15 - 30 Lacs
Hyderabad
Remote
We are looking for a Medical Device Integration Lead / BMDI Lead (EHR/EMR).
Company: Aurumdata Solutions
Client: MNC (UK-based company)
Experience: 10+ years
Location: Pan India / Saudi Arabia
Work Type: Remote
Job Mode: C2H
Duration: 1+ year + extension
Notes: Must be able to write and speak Arabic. Selected candidates need to travel to Saudi Arabia and will work on Aurum's payroll.
Key Skills: Medical Devices, HL7, Healthcare, Integration, EMR/EHR, Oracle, CareAware, Cerner
Job Summary: The Team Lead will be responsible for the end-to-end integration of medical devices with EHR/EMR platforms, primarily Cerner. This role requires an experienced implementer with strong leadership skills who can work across clients with different environments to meet customer needs and business goals.
Basic qualifications:
- Bachelor's degree in Information Systems, Computer Science, Biomedical Engineering, or a related health informatics field.
- At least 3 years of healthcare solution work experience in the IT industry.
- Experience coordinating complex integration projects, including design, build, testing, and conversion phases.
- Familiarity with Linux systems and proficiency in database queries.
- Strong presentation and facilitation skills.
- Excellent written and verbal communication skills.
- Strong analytical and troubleshooting skills.
- Experience with the Cerner (Oracle Healthcare) platform or similar.
- Experience working with CareAware or similar platforms.
Preferred qualifications:
- Manage the complex integration of technology, encompassing hardware, software, and services, in collaboration with solution consulting teams.
- Oversee the implementation, management, and maintenance of platforms for deployment and optimization projects.
- Provide dedicated support for regular status meetings and project events with clients, addressing technical aspects of deployment and optimization projects.
- Assess complex and highly impactful risks and drive the remediation and mitigation of findings throughout client engagements, in partnership with engagement management.
- Cultivate and sustain long-term, strategic internal and external client relationships.
- Serve as the trusted advisor to the client, particularly from a device integration perspective.
- Develop processes for delivering new, emerging technology, leading by example and sharing knowledge and experiences with associates and your team.
- Create a collaborative and respectful work environment where you advocate for your team, establish accountability, and recognize their accomplishments.
- Facilitate knowledge transfer sessions for junior associates, guiding them on the journey to becoming Subject Matter Experts (SMEs) in CareAware solutions.
- Support the team in pre-sales initiatives throughout the initial sales cycle.
- Create comprehensive scope documentation and technical diagrams for Statements of Work (SOWs).
If you are interested, kindly send your CV to Balaram@aurumdatasolutions.com
Thanks & Regards,
Balaram K
Mobile No: +91-9000749410 / 9848771366
USA: +1 (346) 999 0801
Email: Balaram@aurumdatasolutions.com
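As a hedged, toy illustration of the HL7 device-to-EMR data flow this role deals with, the Python sketch below parses the OBX observation segments of a fabricated HL7 v2 ORU^R01 message using plain string handling. Real integrations would go through an interface engine or a platform such as CareAware rather than hand-rolled parsing; the sample message and field choices are illustrative only.

```python
# Toy HL7 v2 example: pull observation code/value/units out of OBX segments.
# The message below is fabricated for illustration (8867-4 is the LOINC code
# for heart rate); production systems use a proper interface engine.
SAMPLE_ORU = (
    "MSH|^~\\&|MONITOR|ICU|EMR|HOSP|202401011200||ORU^R01|00001|P|2.5\r"
    "PID|1||123456^^^HOSP^MR||DOE^JOHN\r"
    "OBX|1|NM|8867-4^Heart rate^LN||72|/min|||||F\r"
)

def parse_observations(message: str) -> list[dict]:
    """Return observation code/value/units from the OBX segments of an HL7 v2 message."""
    observations = []
    for segment in message.strip().split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX":
            observations.append({
                "code": fields[3],   # OBX-3: observation identifier
                "value": fields[5],  # OBX-5: observation value
                "units": fields[6],  # OBX-6: units
            })
    return observations

print(parse_observations(SAMPLE_ORU))
# [{'code': '8867-4^Heart rate^LN', 'value': '72', 'units': '/min'}]
```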
Posted 3 months ago
12 - 20 years
27 - 42 Lacs
Trivandrum, Bengaluru, Hyderabad
Work from Office
Hiring an AWS Big Data Architect who can join immediately with one of our clients.
Role: Big Data Architect / AWS Big Data Architect
Experience: 12+ years
Locations: Hyderabad, Bangalore, Gurugram, Kochi, Trivandrum
Shift Timings: overlap with UK timings (2-11 PM IST)
Notice Period: Immediate joiners / serving notice within 30 days
Required Skills & Qualifications:
- 12+ years of experience in Big Data architecture and engineering.
- Strong expertise in AWS (DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, Redshift, etc.).
- Hands-on experience with Debezium and Kafka for real-time data streaming and synchronization.
- Proficiency in Spark optimization for batch processing improvements.
- Strong SQL and Oracle query optimization experience.
- Expertise in Big Data frameworks (Hadoop, Spark, Hive, Presto, Athena, etc.).
- Experience in CI/CD automation and integrating AWS services with DevOps pipelines.
- Strong problem-solving skills and the ability to work in an Agile environment.
Preferred Skills (Good to Have):
• Experience with Dremio-to-Athena migrations.
• Exposure to cloud-native DR solutions on AWS.
• Strong analytical skills to document and implement performance improvements.
For more details, contact: 9000336401
Mail ID: chandana.n@kksoftwareassociates.com
For more job alerts, please follow: https://lnkd.in/gHMuPUXW
Posted 3 months ago
2 - 6 years
12 - 16 Lacs
Pune
Work from Office
Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive. Strong understanding of AWS services, particularly S3, Redshift, and EMR. Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization. Experience using scheduling tools such as Airflow, Control M, or shell scripting. Practical experience in Python and/or Scala programming languages Preferred technical and professional experience Knowledge of Core Java (1.8 preferred) is highly desired Excellent communication skills and a willing attitude towards learning. Solid experience in Linux and shell scripting. Experience with PySpark or Spark is nice to have Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have Experience in data modelling, data quality assurance, and load assurance is a nice-to-have
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Bengaluru
Work from Office
Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive. Strong understanding of AWS services, particularly S3, Redshift, and EMR. Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization. Experience using scheduling tools such as Airflow, Control M, or shell scripting. Practical experience in Python and/or Scala programming languages Preferred technical and professional experience Knowledge of Core Java (8 preferred) is highly desired Excellent communication skills and a willing attitude towards learning. Solid experience in Linux and shell scripting. Experience with PySpark or Spark is nice to have Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have Experience in data modelling, data quality assurance, and load assurance is a nice-to-have
Posted 3 months ago
2 - 4 years
4 - 6 Lacs
Kochi
Work from Office
Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive. Strong understanding of AWS services, particularly S3, Redshift, and EMR. Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization. Experience using scheduling tools such as Airflow, Control M, or shell scripting. Practical experience in Python and/or Scala programming languages Preferred technical and professional experience Knowledge of Core Java (1.8 preferred) is highly desired Excellent communication skills and a willing attitude towards learning. Solid experience in Linux and shell scripting. Experience with PySpark or Spark is nice to have Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have Experience in data modelling, data quality assurance, and load assurance is a nice-to-have
Posted 3 months ago
India has a growing demand for professionals skilled in EMR (Electronic Medical Records) due to the increasing digitalization of healthcare systems. EMR jobs in India offer a wide range of opportunities for job seekers looking to make a career in this field.
The average salary range for EMR professionals in India varies from ₹3-5 lakhs per annum for entry-level positions to ₹10-15 lakhs per annum for experienced professionals.
In the field of EMR, a typical career path may involve progressing from roles such as EMR Specialist or EMR Analyst to Senior EMR Consultant, and eventually to EMR Project Manager or EMR Director.
In addition to expertise in EMR systems, professionals in this field are often expected to have skills in healthcare data analysis, healthcare IT infrastructure, project management, and knowledge of healthcare regulations.
As you explore EMR jobs in India, remember to showcase your expertise in EMR systems, healthcare data management, and project management during interviews. Prepare confidently and stay updated with the latest trends in the field to enhance your career prospects. Good luck with your job search!