3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Duration: 8 Months | Job Type: Contract | Work Type: Onsite

Top Responsibilities:
- Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, Glue, Apache Airflow, etc.
- Build and deliver high-quality data architecture and pipelines to support business analysts, data scientists, and customer reporting needs.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources.
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Leadership Principles: Ownership, Customer Obsession, Dive Deep, and Deliver Results.

Mandatory Requirements:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL and SQL tuning
- Basic to mid-level proficiency in scripting with Python

Education or Certification Requirements: Any graduation.
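To illustrate the kind of ETL pipeline work this role involves, here is a minimal extract-transform-load sketch in plain Python. The table and field names are invented for illustration; a real pipeline of this kind would run on Glue, EMR, or Airflow rather than stdlib code.

```python
import csv
import io
import sqlite3

# Hypothetical raw export: order events as CSV (names invented for illustration).
RAW_CSV = """order_id,amount,country
1001,250.00,IN
1002,99.50,US
1003,400.00,IN
"""

def extract(text):
    """Extract: parse the raw CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and keep only orders at or above a threshold."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["country"])
        for r in rows
        if float(r["amount"]) >= 100.0
    ]

def load(records):
    """Load: write transformed records into a warehouse-style table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, country TEXT)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
    return con

con = load(transform(extract(RAW_CSV)))
total = con.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 650.0): two orders pass the >= 100 filter
```

The same extract/transform/load split scales up directly: swap the CSV reader for an S3 or Kinesis source and the SQLite sink for Redshift, and the structure is unchanged.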
Posted 3 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Interested candidates can share their updated CV at: heena.ruchwani@gspann.com

Join GSPANN Technologies as a Senior AWS Data Engineer and play a critical role in designing, building, and optimizing scalable data pipelines in the cloud. We're looking for an experienced engineer who can turn complex data into actionable insights using the AWS ecosystem.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines on AWS.
- Work with large datasets to perform ETL/ELT transformations using tools like AWS Glue, EMR, and Lambda.
- Optimize and monitor data workflows, ensuring reliability and performance.
- Collaborate with data analysts, architects, and other engineers to build data solutions that support business needs.
- Implement and manage data lakes, data warehouses, and streaming architectures.
- Ensure data quality, governance, and security standards are met across platforms.
- Participate in code reviews, documentation, and mentoring of junior data engineers.

Required Skills & Qualifications:
- 5+ years of experience in data engineering, with strong hands-on work in the AWS cloud ecosystem.
- Proficiency in Python, PySpark, and SQL.
- Strong experience with AWS services: Glue, Lambda, EMR, S3, Athena, Redshift, Kinesis, etc.
- Expertise in data pipeline development and workflow orchestration (e.g., Airflow, Step Functions).
- Solid understanding of data warehousing and data lake architecture.
- Experience with CI/CD, version control (GitHub), and DevOps practices for data environments.
- Familiarity with Snowflake, Databricks, or Looker is a plus.
- Excellent communication and problem-solving skills.

Interested candidates can share their updated CV at: heena.ruchwani@gspann.com
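The workflow-orchestration skill mentioned in postings like this (Airflow, Step Functions) boils down to running tasks in dependency order. A minimal sketch using Python's stdlib topological sorter, with invented task names, shows the core idea:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline dependencies (task names invented for illustration):
# each key lists the tasks that must finish before it may run,
# like upstream tasks in an Airflow DAG.
dag = {
    "transform": {"extract"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

# static_order() yields a valid execution order respecting all dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load_warehouse', 'refresh_dashboard']
```

Real orchestrators add scheduling, retries, and parallelism on top, but the dependency graph and its topological execution order are the same concept.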
Posted 3 weeks ago
8.0 - 10.0 years
8 - 12 Lacs
Pune
Work from Office
Role Purpose
The role incumbent is focused on implementing roadmaps for business process analysis, data analysis, diagnosis of gaps, business requirements and functional definitions, best-practice application, and meeting facilitation, and contributes to project planning. Consultants are expected to contribute to solution building for the client and the practice. The role holder can handle higher scale and complexity compared to a Consultant profile and is more proactive in client interactions.

Do
- Assumes responsibility as the main client contact, leading the engagement with 10-20% support from Consulting & Client Partners.
- Develops, assesses, and validates a client's business strategy, including industry and competitive positioning and strategic direction.
- Develops solutions and services to suit the client's business strategy.
- Estimates scope and liability for delivery of the end product/solution.
- Seeks opportunities to develop revenue in existing and new areas.
- Leads an engagement and oversees others' contributions at the customer end, such that customer expectations are met or exceeded.
- Drives proposal creation and presales activities for the engagement and new accounts.
- Contributes towards the development of practice policies, procedures, frameworks, etc.
- Guides less experienced team members in delivering solutions.
- Leads efforts towards building go-to-market / off-the-shelf / point solutions and process methodologies for reuse.
- Creates reusable IP from managed projects.

Mandatory Skills: Telecom BSS NextGen Ops.
Experience: 8-10 years.
Posted 3 weeks ago
4.0 - 6.0 years
2 - 6 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Job Title: Sr. AWS Data Engineer
Experience: 4-6 years
Location: Pune, Hyderabad, Gurgaon, Bangalore (Hybrid)
Skills: PySpark, Python, SQL; AWS services: S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch.
Posted 3 weeks ago
5.0 - 7.0 years
2 - 5 Lacs
Pune
Work from Office
Job Title: Data Engineer
Experience: 5-7 years
Location: Pune

Roles & Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Build data pipelines that transform raw, unstructured data into formats that data analysts can use for analysis.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and delivery of data from a wide variety of data sources using SQL and AWS 'Big Data' technologies.
- Work with stakeholders, including the Executive, Product, and Program teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using HQL and 'Big Data' technologies.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Write unit/integration tests, contribute to the engineering wiki, and document work.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Who You Are:
- You're passionate about data and building efficient data pipelines.
- You have excellent listening skills and are empathetic to others.
- You believe in simple and elegant solutions and give paramount importance to quality.
- You have a track record of building fast, reliable, and high-quality data pipelines.
- You are passionate, with a good understanding of data, and focused on having fun while delivering incredible business results.

Must-have skills:
- A Data Engineer with 5+ years of relevant experience who is excited to apply their current skills and grow their knowledge base.
- A degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience with big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.
- Experience with Airflow/Oozie.
- Experience in AWS/Spark/Python development.
- Experience with Git, JIRA, Jenkins, and shell scripting.
- Familiarity with Agile methodology, test-driven development, source control management, and automated testing.
- Ability to build processes supporting data transformation, data structures, metadata, dependencies, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Nice-to-have skills:
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with Snowflake.
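The data-validation responsibility that recurs in these postings can be sketched in plain Python. The field names and rules below are invented for illustration; in production such checks usually live inside the pipeline framework (e.g., as a pre-load quality gate):

```python
# Minimal data-quality check sketch: validate records before loading.
# Field names and rules are hypothetical, chosen only to show the pattern.
RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
}

def validate(record):
    """Return the list of field names that fail their quality rule."""
    return [field for field, rule in RULES.items()
            if not rule(record.get(field))]

records = [
    {"user_id": 1, "email": "a@example.com", "age": 34},
    {"user_id": -5, "email": "broken-address", "age": 34},
]

# Route clean records onward; quarantine failures with their reasons.
good = [r for r in records if not validate(r)]
bad = [(r["user_id"], validate(r)) for r in records if validate(r)]
print(len(good), bad)
```

Keeping validation declarative (a dict of rules rather than inline ifs) makes the quality contract easy to review and extend as sources change.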
Posted 3 weeks ago
6.0 - 9.0 years
10 - 20 Lacs
Hyderabad
Work from Office
This requirement is to source profiles with 6-9 years of overall experience, including a minimum of 4 years in data engineering. Look for combinations of Informatica, IICS, Python, PySpark, SQL, Step Functions, Lambda, and EMR at a high level (if not Informatica, Talend profiles can be submitted). Location: Hyderabad.

Key responsibilities and accountabilities:
- Design, build, and maintain complex ELT/ETL jobs that deliver business value.
- Extract, transform, and load data from various sources, including databases, APIs, and flat files, using IICS or Python/SQL.
- Translate high-level business requirements into technical specs.
- Conduct unit testing, integration testing, and system testing of data integration solutions to ensure accuracy and quality.
- Ingest data from disparate sources into the data lake and data warehouse.
- Cleanse and enrich data and apply adequate data quality controls.
- Provide technical expertise and guidance to team members on Informatica IICS/IDMC and data engineering best practices to guide the future development of MassMutual's Data Platform.
- Develop reusable tools to help streamline the delivery of new projects.
- Collaborate closely with other developers and provide mentorship.
- Evaluate and recommend tools, technologies, processes, and reference architectures.
- Work in an Agile development environment, attending daily stand-up meetings and delivering incremental improvements.
- Participate in code reviews, ensure all solutions are aligned to architectural and requirement specifications, and provide feedback on code quality, design, and performance.

Knowledge, skills and abilities: please refer to 'Education and Experience'.

Education and experience:
- Bachelor's degree in computer science, engineering, or a related field; Master's degree preferred.
- Data: 5+ years of experience with data analytics and data warehousing; sound knowledge of data warehousing concepts.
- SQL: 5+ years of hands-on experience with SQL and query optimization for data pipelines.
- ELT/ETL: 5+ years of experience in Informatica; 3+ years of experience in IICS/IDMC.
- Migration: experience with Informatica on-prem to IICS/IDMC migration.
- Cloud: 5+ years of experience working in an AWS cloud environment.
- Python: 5+ years of hands-on development experience with Python.
- Workflow: 4+ years of experience with orchestration and scheduling tools (e.g., Apache Airflow).
- Advanced data processing: experience using data processing technologies such as Apache Spark or Kafka.
- Troubleshooting: experience with troubleshooting and root cause analysis to determine and remediate potential issues.
- Communication: excellent communication, problem-solving, organizational, and analytical skills; able to work independently and to provide leadership to small teams of developers.
- Reporting: experience with data reporting tools (e.g., MicroStrategy, Tableau, Looker) and data cataloging tools (e.g., Alation).
- Experience in the design and implementation of ETL solutions with effective design and optimized performance, including industry-standard recommendations for job recovery, failover, logging, and alerting mechanisms.

Application Requirements: No special requirements.
Support Hours: India GCC, with a 2-3 hour overlap with US (EST) hours.
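The "query optimization for data pipelines" requirement above is easy to demonstrate concretely: adding an index turns a full table scan into an index search. The sketch below uses SQLite from Python's stdlib, with an invented table, purely to show the before/after query plans:

```python
import sqlite3

# Hypothetical events table (names invented for illustration).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (event_id INTEGER, customer_id INTEGER, amount REAL)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(i, i % 100, float(i)) for i in range(1000)])

def plan(sql):
    """Return SQLite's query plan for `sql` as a single string."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM events WHERE customer_id = 42"
before = plan(query)  # full table scan: every row is examined
con.execute("CREATE INDEX idx_customer ON events (customer_id)")
after = plan(query)   # index search: only matching rows are touched

print(before)
print(after)
```

The same diagnostic habit (inspect the plan, then add or adjust indexes, partitions, or sort keys) carries over to Redshift and other warehouses, where the tooling differs but the reasoning is identical.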
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Grade: 7

Purpose of your role
This role sits within the ISS Data Platform Team, which is responsible for building and maintaining the platform that enables the ISS business to operate. The role suits a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience

Core Technical Skills
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services, such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
- Data security & performance optimization: experience implementing data access controls to meet regulatory requirements.
- Experience with both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).

Bonus Technical Skills
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.

Key Soft Skills
- Problem-solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong in strategic communication and stakeholder engagement.
- Project management: experienced in overseeing project lifecycles, working with Project Managers to manage resources.
Posted 3 weeks ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs.

Key Responsibilities:
- Design cloud-native solutions using AWS, Azure, or GCP.
- Lead cloud migration and transformation projects.
- Define cloud governance, cost control, and security strategies.
- Collaborate with DevOps and engineering teams for implementation.

Required Skills & Qualifications:
- Deep expertise in cloud architecture and multi-cloud environments.
- Experience with containers, serverless, and microservices.
- Proficiency in Terraform, CloudFormation, or equivalent.
- Bonus: cloud certification (AWS/Azure/GCP Architect).

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 3 weeks ago
7.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Role Overview
We are seeking an experienced Data Engineer with 7-10 years of experience to design, develop, and optimize data pipelines while integrating machine learning (ML) capabilities into production workflows. The ideal candidate will have a strong background in data engineering, big data technologies, cloud platforms, and ML model deployment. This role requires expertise in building scalable data architectures, processing large datasets, and supporting machine learning operations (MLOps) to enable data-driven decision-making.

Key Responsibilities

Data Engineering & Pipeline Development
- Design, develop, and maintain scalable, robust, and efficient data pipelines for batch and real-time data processing.
- Build and optimize ETL/ELT workflows to extract, transform, and load structured and unstructured data from multiple sources.
- Work with distributed data processing frameworks like Apache Spark, Hadoop, or Dask for large-scale data processing.
- Ensure data integrity, quality, and security across the data pipelines.
- Implement data governance, cataloging, and lineage tracking using appropriate tools.

Machine Learning Integration
- Collaborate with data scientists to deploy, monitor, and optimize ML models in production.
- Design and implement feature engineering pipelines to improve model performance.
- Build and maintain MLOps workflows, including model versioning, retraining, and performance tracking.
- Optimize ML model inference for low-latency and high-throughput applications.
- Work with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn, and deployment tools like Kubeflow, MLflow, or SageMaker.

Cloud & Big Data Technologies
- Architect and manage cloud-based data solutions using AWS, Azure, or GCP.
- Utilize serverless computing (AWS Lambda, Azure Functions) and containerization (Docker, Kubernetes) for scalable deployment.
- Work with data lakehouses (Delta Lake, Iceberg, Hudi) for efficient storage and retrieval.

Database & Storage Management
- Design and optimize relational (PostgreSQL, MySQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
- Manage and optimize data warehouses (Snowflake, BigQuery, Redshift, Databricks) for analytical workloads.
- Implement data partitioning, indexing, and query optimizations for performance improvements.

Collaboration & Best Practices
- Work closely with data scientists, software engineers, and DevOps teams to develop scalable and reusable data solutions.
- Implement CI/CD pipelines for automated testing, deployment, and monitoring of data workflows.
- Follow best practices in software engineering, data modeling, and documentation.
- Continuously improve the data infrastructure by researching and adopting new technologies.

Required Skills & Qualifications

Technical Skills:
- Programming languages: Python, SQL, Scala, Java
- Big data technologies: Apache Spark, Hadoop, Dask, Kafka
- Cloud platforms: AWS (Glue, S3, EMR, Lambda), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow)
- Data warehousing: Snowflake, Redshift, BigQuery, Databricks
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra
- ETL/ELT tools: Airflow, dbt, Talend, Informatica
- Machine learning tools: MLflow, Kubeflow, TensorFlow, PyTorch, Scikit-learn
- MLOps & model deployment: Docker, Kubernetes, SageMaker, Vertex AI
- DevOps & CI/CD: Git, Jenkins, Terraform, CloudFormation

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent collaboration and communication skills.
- Ability to work in an agile, cross-functional team environment.
- Strong documentation and technical writing skills.

Preferred Qualifications
- Experience with real-time streaming solutions like Apache Flink or Spark Streaming.
- Hands-on experience with vector databases and embeddings for ML-powered applications.
- Knowledge of data security, privacy, and compliance frameworks (GDPR, HIPAA).
- Experience with GraphQL and REST API development for data services.
- Understanding of LLMs and AI-driven data analytics.
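The feature-engineering pipelines mentioned above often start with simple numeric transforms. As a hypothetical example (feature name and values invented), standardizing a feature to z-scores before model inference looks like this in plain Python:

```python
from statistics import mean, pstdev

def zscore(values):
    """Standardize values to zero mean and unit variance (population stdev)."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical raw feature: request latencies in milliseconds.
latencies = [10.0, 20.0, 30.0, 40.0]
features = zscore(latencies)
print([round(f, 3) for f in features])  # [-1.342, -0.447, 0.447, 1.342]
```

In a real MLOps workflow the mean and stdev would be fitted on training data and persisted alongside the model, so that serving-time features are scaled with exactly the same parameters.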
Posted 3 weeks ago
6.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Role: Big Data Engineer
Work Location: Bangalore (CV Raman Nagar)
Experience: 7+ years
Notice Period: Immediate to 30 days
Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud

JD and required skills & responsibilities:
- Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
- Solve complex business problems by utilizing a disciplined development methodology.
- Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
- Analyse the source and target system data; map the transformations that meet the requirements.
- Interact with the client and onsite coordinators during different phases of a project.
- Design and implement product features in collaboration with business and technology stakeholders.
- Anticipate, identify, and solve issues concerning data management to improve data quality.
- Clean, prepare, and optimize data at scale for ingestion and consumption.
- Support the implementation of new data management projects and restructuring of the current data architecture.
- Implement automated workflows and routines using workflow scheduling tools.
- Understand and use continuous integration, test-driven development, and production deployment frameworks.
- Participate in design, code, and test-plan reviews and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
- Analyze and profile data for the purpose of designing scalable solutions.
- Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.

Required Skills:
- 5+ years of relevant experience developing data and analytics solutions.
- Experience building data lake solutions leveraging one or more of: AWS EMR, S3, Hive, and PySpark.
- Experience with relational SQL.
- Experience with scripting languages such as Python.
- Experience with source control tools such as GitHub and the related dev process.
- Experience with workflow scheduling tools such as Airflow.
- In-depth knowledge of AWS Cloud (S3, EMR, Databricks).
- A passion for data solutions.
- A strong problem-solving and analytical mindset.
- Working experience in the design, development, and testing of data pipelines.
- Experience working with Agile teams.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Bachelor's degree in computer science.
Posted 3 weeks ago
1.0 - 5.0 years
3 - 7 Lacs
Noida
Work from Office
Looking for a dynamic sales professional to sell our ERP and EMR solutions.

Requirements:
- 5+ years in sales, especially selling software solutions, preferably ERP/EMR or similar.
- Experience in the manufacturing sector for ERP, and in hospitals/clinics for EMR solutions, will be given preference.
- Must be ready to travel across NCR and the north-western states.
- Good communication and computer skills.
- Ability to close deals.
Posted 3 weeks ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

Looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms.

Key Responsibilities:
- Develop ETL workflows using cloud data services.
- Manage data storage, lakes, and warehouses.
- Ensure data quality and pipeline reliability.

Required Skills & Qualifications:
- Experience with BigQuery, Redshift, or Azure Synapse.
- Proficiency in SQL, Python, or Spark.
- Familiarity with data lake architecture and batch/streaming.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 3 weeks ago
4.0 - 9.0 years
9 - 19 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings from HCL! Currently hiring for "RIS PACS".

JD: Familiarity working with radiologists.
Skills: RIS, VNA, Enterprise Imaging, HL7, DICOM, PACS, dictation/speech recognition software, image exchange software, Intelerad PACS, Fuji EIS, PowerScribe 360. Good knowledge of RIS/PACS/DICOM/HL7 standards.
Experience: 3-12 years
Location: Bangalore / Chennai / Noida / Pune / Hyderabad
Notice period: Immediate to 30 days only
CTC: Can be discussed

Interested candidates, please share the below details along with an updated resume:
- Candidate name
- Contact number
- Email ID
- Total experience
- Relevant experience
- Skill
- Current company
- Preferred location
- Notice period
- Current CTC
- Expected CTC

Please drop a mail to "kushmathattanda.baby@hcltech.com".

Regards,
Kushma
kushmathattanda.baby@hcltech.com
Posted 3 weeks ago
2 - 5 years
3 - 5 Lacs
Kondapur
Work from Office
Join Our Team at Staffingly, Inc. (Kondapur, Hyderabad)

Job Title: Trainer/Training Coordinator, US Healthcare
Location: Kondapur, Hyderabad
Type: Full-Time, On-Site (NO REMOTE)
Shift Timing: USA shift (India night): 6:30 PM IST to 3:30 AM IST
Start Date: Immediate

We appreciate the value of your time as well as ours, so please review the entire job description and apply only if you are interested in working at our office in Kondapur, Hyderabad.

At Staffingly, Inc., we are at the forefront of revolutionizing healthcare operations by providing essential services to doctors, laboratories, pharmacies, and other healthcare providers. As a leader in economical Prior Authorization solutions, we tackle the challenges of staff shortages that impact revenue flow and patient care quality. Our mission is to empower healthcare facilities to focus on what truly matters: exceptional patient care, by simplifying and streamlining their administrative processes.

Our comprehensive service offerings include handling intricate Prior Authorization processes, accurate insurance verifications, expert management of medication and procedural authorizations, full-spectrum Revenue Cycle Management (RCM), Medical Billing/Coding, Data Entry, and Customer Support services. With 24/7 operations, we ensure efficiency and responsiveness, supporting our clients in maintaining smooth and effective healthcare delivery.

If you're passionate about making a meaningful impact in the healthcare industry by improving operational efficiencies and enhancing patient care, Staffingly, Inc. is the place for you. We are eager to see how your skills and expertise can contribute to our growth and success. For more information, visit us at https://staffingly.com

Join Staffingly, Inc.'s WhatsApp channel to receive the latest job updates and tips: https://hie.li/kAC

Position Overview:
Important note: only applicants with relevant experience as specified in the job requirements should apply. This position demands specific skills and experience as a Trainer/Training Coordinator in US Healthcare. If your background does not align with these criteria, please consider other opportunities more suited to your qualifications.

Job Summary
Looking for a skilled Trainer/Coordinator with hands-on experience in US healthcare processes (RCM, billing, coding, claims, etc.) and system training (EMRs, healthcare platforms). Must also train staff on MS Office tools (Excel, Word, PowerPoint) and ensure team readiness on tools, processes, and compliance.

Key Responsibilities
- Train on US healthcare processes: RCM, claims, billing, coding, eligibility, prior auth.
- Conduct sessions on EMR/software usage (e.g., Athena, Kareo, eClinicalWorks, AdvancedMD).
- Provide hands-on training for MS Excel, Word, and PowerPoint.
- Ensure knowledge of HIPAA, PHI handling, and compliance.
- Create training docs, SOPs, and assessments.
- Conduct refreshers, floor support, and nesting.
- Track trainee progress and share reports.

Requirements
- 2 to 5 years of experience in US healthcare.
- Strong knowledge of RCM workflows and healthcare terms.
- Fluent in English; good with presentations and communication.
- Proficient in MS Office and basic tools.
- Willing to work US shift timings.

Benefits:
- Provident Fund contributions.
- Overtime and holiday pay.
- On-site benefits, including travel allowances and meals.
- Referral and birthday bonuses.
- Night shift allowances.
- Recognition through our "Employee of the Month" program.

Please email your CV to career@staffingly.in with the subject line "JOB APPLICATION: Trainer/Training Coordinator, US Healthcare".
Posted 1 month ago
- 5 years
13 - 18 Lacs
Chennai
Work from Office
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About The Role
Role: Service Desk Manager
Band: C1 (Data Architect)
Location: Chennai, Noida
Total experience: 11+ years

The candidate must have overall 11+ years of experience in ETL and data warehousing, of which 3-4 years are on the Hadoop platform and at least 2 years in a cloud big data environment.
- Must have hands-on experience with Hadoop services like Hive, Spark, Scala, and Sqoop.
- Must be hands-on in writing complex, use-case-driven SQL.
- Should have about 3+ years of hands-on knowledge of key AWS Cloud and on-prem services and concepts.
- Should have 3+ years of working experience with AWS Cloud tools like EMR, Redshift, Glue, and S3.
- Should have been involved in an on-prem-to-cloud migration process.
- Should have good knowledge of Hive / Spark / Scala scripts.
- Should have good knowledge of Unix shell scripting.
- Should be flexible to overlap US business hours.
- Should be able to drive technical design of cloud applications.
- Should be able to guide and drive team members through cloud implementations.
- Should be well versed with the costing model and best practices of the services to be used for data processing pipelines in a cloud environment.
- AWS-certified applicants preferred.

Competencies: Client Centricity, Passion for Results, Collaborative Working, Problem Solving & Decision Making, Effective Communication.

Reinvent your world.
We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 month ago
8 - 10 years
8 - 12 Lacs
Pune
Work from Office
About The Role

Role Purpose
The role incumbent is focused on implementing roadmaps for business process analysis, data analysis, diagnosis of gaps, business requirements and functional definitions, best-practice application, and meeting facilitation, and contributes to project planning. Consultants are expected to contribute to solution building for the client and the practice. The role holder can handle higher scale and complexity compared to a Consultant profile and is more proactive in client interactions.

Do
- Assumes responsibility as the main client contact, leading the engagement with 10-20% support from Consulting & Client Partners.
- Develops, assesses, and validates a client's business strategy, including industry and competitive positioning and strategic direction.
- Develops solutions and services to suit the client's business strategy.
- Estimates scope and liability for delivery of the end product/solution.
- Seeks opportunities to develop revenue in existing and new areas.
- Leads an engagement and oversees others' contributions at the customer end, such that customer expectations are met or exceeded.
- Drives proposal creation and presales activities for the engagement and new accounts.
- Contributes towards the development of practice policies, procedures, frameworks, etc.
- Guides less experienced team members in delivering solutions.
- Leads efforts towards building go-to-market / off-the-shelf / point solutions and process methodologies for reuse.
- Creates reusable IP from managed projects.

Mandatory Skills: Telecom BSS NextGen Ops.
Experience: 8-10 years.
Posted 1 month ago
1 - 3 years
2 - 5 Lacs
Chennai
Work from Office
No. of Openings: 2
Grade: 1B
Designation: Senior Coder
Closing Date: 21 May 2025
Location: Chennai-I, Chennai, Tamil Nadu, IN
Skills: Medical Coding, Healthcare, HIPAA, CPT, ICD-9, ICD-10, EMR, Medical Billing, Healthcare Management, Revenue Cycle
Education Qualification: No data available
Certification: No data available
About The Role
Overview: The Coder is accountable for managing the day-to-day activities of coding patient charts and diagnosis reports.
Responsibility Areas:
Coding or auditing charts, based on requirements
Updating/clearing the production/pending reports
Working closely with the team leader
Reviewing emails for any updates
Identifying issues and escalating them to the immediate supervisor
Strict adherence to company policies and procedures
Sound knowledge of medical coding concepts
6 months to 3 years of coding experience
Understanding the client requirements and specifications of the project
Meeting the productivity targets of clients within the stipulated time (daily & monthly)
Applying the instructions/updates received from the client during production
Preparing and maintaining reports
Posted 1 month ago
4 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
Data Engineer | 4 to 6 years | Bengaluru
Job description:
4+ years of microservices development experience in two of these: Python, Java, Scala
4+ years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
4+ years of experience with Big Data technologies: Apache Spark, Hadoop, or Kafka
3+ years of experience with relational & non-relational databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB)
3+ years of experience working with data consumption patterns
3+ years of experience working with automated build and continuous integration systems
2+ years of experience in cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena)
Primary Skills: Python, Java, Scala, data pipelines, Apache Spark, Hadoop or Kafka, Postgres, MySQL, NoSQL
Secondary Skills: Snowflake, Redshift, Relational Data Modeling, Dimensional Data Modeling
Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. The work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. Builds skills and expertise in the software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.
Posted 1 month ago
3 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role: Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.
About The Role - Grade Specific: The primary focus is to help organizations design, develop, and optimize their data infrastructure and systems. They help organizations enhance data processes and leverage data effectively to drive business outcomes.
Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, CentOS, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP BigQuery, GCP Big Table, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Talend, Teradata, Time Management, Ubuntu, Vendor Management
Posted 1 month ago
1 - 5 years
9 - 13 Lacs
Kolkata
Work from Office
Increasing digitalization and flexibility of production processes presents outstanding potential. In Digital Industries, we enable our customers to unlock their full potential and drive digital transformation with a unique portfolio of automation and digitalization technologies. From hardware to software to services, we've got quite a lot to offer. How about you? We blur the boundaries between industry domains by integrating the virtual and physical, hardware and software, design and manufacturing worlds. With the rapid pace of innovation, digitalization is no longer tomorrow's idea. We take what the future promises tomorrow and make it real for our customers today. Join us - where your career meets tomorrow. Siemens EDA is a global technology leader in Electronic Design Automation software. Our software tools enable companies around the world to develop highly innovative electronic products faster and more efficiently. Our customers use our tools to push the boundaries of technology and physics to deliver better products in the increasingly complex world of chip, board, and system design. Questa Simulation Product: a core R&D team working on multiple verticals of Simulation - a very energetic and enthusiastic team of motivated individuals. Responsibilities: We are looking for a highly motivated software engineer to work in the QuestaSim R&D team of Siemens EDA. Development responsibilities will include core algorithmic advances and software design/architecture. You will collaborate with a senior group of software engineers, contributing to final production-level quality of new components and algorithms, creating new engines, and supporting existing code. Self-motivation, self-discipline, and the ability to set personal goals and work consistently towards them in a dynamic environment will go a long way toward your success. We are not looking for superheroes, just super minds! We've got quite a lot to offer. How about you?
Required Experience: We seek a graduate with at least 2 years of relevant working experience and a B.Tech or M.Tech in CSE/EE/ECE from a reputed engineering college.
Proficiency in C/C++, algorithms, and data structures
Compiler concepts and optimizations
Experience with UNIX and/or Linux platforms is vital
Basic digital electronics concepts
We value your knowledge of Verilog, SystemVerilog, and VHDL
Experience in parallel algorithms and job distribution
Understanding of ML/AI algorithms and their implementation in data-driven tasks
Exposure to simulation- or formal-based verification methodologies would be a plus!
Self-motivated, able to work independently, and able to guide others towards project completion
Good problem-solving and analytical skills
We are Siemens - a collection of over 377,000 minds building the future, one day at a time, in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! We offer a comprehensive reward package which includes a competitive basic salary, variable pay, other benefits, pension, healthcare, and active support for working from home. We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. #LI-EDA #LI-Hybrid
Posted 1 month ago
3 - 5 years
6 - 10 Lacs
Bengaluru
Work from Office
Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations. We are looking for a Data Engineer - a skilled Data Architect/Engineer with strong expertise in AWS and data lake solutions. If you're passionate about building scalable data platforms, this role is for you. Your responsibilities will include:
Architect & Design: Build scalable and efficient data solutions using AWS services like Glue, Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, Glue Streaming ETL, and EMR.
Real-Time Data Integration: Integrate real-time data from multiple Siemens orgs into our central data lake.
Data Lake Management: Design and manage large-scale data lakes using S3, Glue, and Lake Formation.
Data Transformation: Apply transformations to ensure high-quality, analysis-ready data.
Snowflake Integration: Build and manage pipelines for Snowflake, using Iceberg tables for best performance and flexibility.
Performance Tuning: Optimize pipelines for speed, scalability, and cost-effectiveness.
Security & Compliance: Ensure all data solutions meet security standards and compliance guidelines.
Team Collaboration: Work closely with data engineers, scientists, and app developers to deliver full-stack data solutions.
Monitoring & Troubleshooting: Set up monitoring tools and quickly resolve pipeline issues when needed.
You'd describe yourself as:
Experience: 3+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
Technical Skills: Proficiency in AWS services such as the AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka, and Lake Formation.
Experience with real-time data processing and streaming architectures.
Big Data Querying Tools: solid understanding of big data querying tools (e.g., Hive, PySpark).
Programming: strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
Problem-Solving: excellent problem-solving skills and the ability to troubleshoot complex issues.
Communication: strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Certifications: AWS certifications are a plus.
Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
Posted 1 month ago
5 - 10 years
12 - 22 Lacs
Gurugram
Hybrid
Role & responsibilities
Skill: Python, AI/ML
Location: Gurgaon
Experience: 5+ years
Notice period: Immediate
Key Responsibilities:
Design, develop, and maintain robust, scalable Python applications and services.
Work closely with data scientists, ML engineers, and product teams to integrate machine learning models into production systems.
Develop and optimize AI-enabled applications, including RAG pipelines, vector search, or recommendation engines.
Build APIs and backend services for real-time and batch AI/ML workflows.
Contribute to infrastructure for data preprocessing, feature engineering, and inference pipelines.
Ensure code quality, testing, and documentation across the lifecycle.
Stay up to date with emerging trends in AI, including LLMs, embeddings, and neural search.
Required Qualifications:
5+ years of professional experience in Python development.
Strong understanding of data structures, algorithms, and software architecture principles.
Experience with web frameworks (e.g., Flask, FastAPI, Django).
Familiarity with RESTful APIs, microservices, and distributed systems.
Exposure to machine learning workflows, including data ingestion, model serving, or model monitoring.
Knowledge of version control systems (Git) and CI/CD pipelines.
Ability to work in cross-functional teams and communicate technical concepts clearly.
Preferred Qualifications:
Experience with LLMs, RAG architectures, or vector databases (e.g., FAISS, Pinecone, Weaviate).
Familiarity with ML frameworks such as TensorFlow, PyTorch, Scikit-learn, or LangChain.
Understanding of prompt engineering, tokenization, embeddings, and NLP pipelines.
Hands-on experience with cloud platforms (AWS, GCP, or Azure).
Exposure to containerization and orchestration tools (Docker, Kubernetes).
Posted 1 month ago
2 - 5 years
3 - 7 Lacs
Gurugram
Work from Office
Role: Data Engineer
Skills:
Data Modeling: Design and implement efficient data models, ensuring data accuracy and optimal performance.
ETL Development: Develop, maintain, and optimize ETL processes to extract, transform, and load data from various sources into our data warehouse.
SQL Expertise: Write complex SQL queries to extract, manipulate, and analyze data as needed.
Python Development: Develop and maintain Python scripts and applications to support data processing and automation.
AWS Expertise: Leverage your deep knowledge of AWS services, such as S3, Redshift, Glue, EMR, and Athena, to build and maintain data pipelines and infrastructure.
Infrastructure as Code (IaC): Experience with tools like Terraform or CloudFormation to automate the provisioning and management of AWS resources is a plus.
Big Data Processing: Knowledge of PySpark for big data processing and analysis is desirable.
Source Code Management: Utilize Git and GitHub for version control and collaboration on data engineering projects.
Performance Optimization: Identify and implement optimizations for data processing pipelines to enhance efficiency and reduce costs.
Data Quality: Implement data quality checks and validation procedures to maintain data integrity.
Collaboration: Work closely with data scientists, analysts, and other teams to understand data requirements and deliver high-quality data solutions.
Documentation: Maintain comprehensive documentation for all data engineering processes and projects.
Posted 1 month ago
3 - 7 years
5 - 10 Lacs
Thiruvananthapuram
Remote
Job Summary: The Insurance Verification Manager will be responsible for overseeing the insurance verification process, ensuring timely and accurate verification of patient insurance eligibility and benefits. This role involves managing a team, optimizing workflows, and leveraging advanced system features to enhance operational efficiency and patient experience.
Key Responsibilities:
Supervise and mentor the insurance verification team, setting performance goals and conducting regular evaluations.
Provide training and support to team members on insurance verification tools and best practices.
Ensure timely verification of patient insurance eligibility, benefits, coverage levels, exclusions, and limitations.
Monitor and manage the verification process, addressing any discrepancies or issues promptly.
Utilize efficient scheduling and patient-list management techniques to prioritize verification tasks.
Implement and maintain insurance templates to streamline data entry and reduce errors.
Coordinate with other departments to ensure seamless integration of insurance verification with scheduling, billing, and patient care.
Communicate with insurance providers to resolve verification issues and stay updated on policy changes.
Generate and analyze reports on verification metrics, claim statuses, and aging balances.
Ensure compliance with HIPAA and other regulatory requirements in all insurance verification activities.
Qualifications:
Bachelor's degree in Healthcare Administration, Business Administration, or a related field.
Minimum of 5 years of experience in insurance verification or healthcare revenue cycle management, with at least 2 years in a managerial or supervisory role.
In-depth knowledge of insurance policies, eligibility criteria, coverage details, and claims processes.
Proficiency in using insurance verification software, practice management systems, or related healthcare management tools.
Strong understanding of HIPAA regulations and other healthcare compliance requirements.
Excellent leadership and team management skills.
Strong analytical and problem-solving skills, with the ability to make data-driven decisions.
Effective communication and interpersonal skills, with the ability to coordinate with cross-functional teams.
Proficiency in Microsoft Office Suite (Word, Excel, PowerPoint) and experience with data analysis tools.
Professional certifications in healthcare management or medical billing and coding (e.g., CPC, AAPC, CHAM) are a plus.
Willingness to work night shifts.
Posted 1 month ago
2 - 7 years
2 - 4 Lacs
Mumbai, Mumbai Suburban, Navi Mumbai
Work from Office
A CRM Executive uses CRM software to track interactions, maintain customer data, nurture leads, and deliver service that boosts sales, loyalty, and repeat business in the jewellery industry, staying informed on products, trends, and customer preferences.
Required Candidate profile: Strong communication skills; customer service expertise; CRM proficiency; sales acumen; jewellery industry knowledge; data analysis skills; proficiency in MS Office Suite.
Perks and benefits: Annual bonus, leave encashment, overtime, leaves.
Posted 1 month ago
India has a growing demand for professionals skilled in EMR (Electronic Medical Records) due to the increasing digitalization of healthcare systems. EMR jobs in India offer a wide range of opportunities for job seekers looking to make a career in this field.
The average salary range for EMR professionals in India varies from ₹3-5 lakhs per annum for entry-level positions to ₹10-15 lakhs per annum for experienced professionals.
In the field of EMR, a typical career path may involve progressing from roles such as EMR Specialist or EMR Analyst to Senior EMR Consultant, and eventually to EMR Project Manager or EMR Director.
In addition to expertise in EMR systems, professionals in this field are often expected to have skills in healthcare data analysis, healthcare IT infrastructure, project management, and knowledge of healthcare regulations.
As you explore EMR jobs in India, remember to showcase your expertise in EMR systems, healthcare data management, and project management during interviews. Prepare confidently and stay updated with the latest trends in the field to enhance your career prospects. Good luck with your job search!