4.0 - 8.0 years
0 Lacs
coimbatore, tamil nadu
On-site
We are looking for a highly skilled and motivated Senior Technical Analyst to become a valuable part of our team. In this role, you will combine business acumen, data expertise, and technical proficiency to contribute to the development of scalable data-driven products and solutions. The ideal candidate will act as a bridge between business stakeholders and the technical team, ensuring the delivery of robust, scalable, and actionable data solutions.

Your key responsibilities will include analyzing and critically evaluating client-provided technical and functional requirements, collaborating with stakeholders to identify gaps and areas needing clarification, and aligning business objectives with data capabilities. Additionally, you will be expected to contribute to defining and prioritizing product features in collaboration with technical architects and cross-functional teams, conduct data validation and exploratory analysis, and develop detailed user stories and acceptance criteria to guide development teams. As a Senior Technical Analyst, you will also be responsible for conducting user acceptance testing, ensuring solutions meet performance and security requirements, and serving as the primary interface between clients, vendors, and internal teams throughout the project lifecycle. Furthermore, you will guide cross-functional teams, collaborate with onsite team members, and drive accountability to ensure deliverables meet quality standards and timelines.

To be successful in this role, you should have a Bachelor's degree in computer science, information technology, business administration, or a related field; a Master's degree is preferred. You should also have 4-5 years of experience managing technology-driven projects, with at least 3 years in a Technical Business Analyst or equivalent role. Strong experience in SQL, data modeling, and data analysis, as well as hands-on knowledge of cloud platforms with a focus on data engineering solutions, is essential. Familiarity with APIs, data pipelines, workflow orchestration, and automation, along with a deep understanding of Agile/Scrum methodologies and experience with Agile tools, will be beneficial. Exceptional problem-solving, critical-thinking, decision-making, communication, presentation, and stakeholder management abilities are also key skills required for this role.

This is a full-time permanent position located at DGS India - Pune - Kharadi EON Free Zone under the brand Merkle. If you are looking for a challenging role where you can contribute to the development of innovative data-driven solutions, we would love to hear from you.
Posted 2 days ago
2.0 - 6.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and to transform how companies interact with customers across every screen.

We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! #FireflyGenAI

The engineer will be part of a team working on the development, operations, and support of Adobe's AI Platform. They will be responsible for the design, architecture, and development of new features and the maintenance of existing features, and will handle all phases of development, from early specs and definition to release. They are encouraged to be hands-on problem solvers, well conversant in analyzing, architecting, and implementing world-class, high-quality software in Golang/Python. Prior experience with ML solutions, cloud platform services, workflow orchestrators, and data pipeline solutions would be a plus.

What You'll Do
This is an individual contributor position. Hands-on product/solution development knowledge is a must. The position involves conceptualization of a product; design, development, debugging/triaging, deployment at scale, and monitoring and analysis; and planning, effort estimation, and risk analysis of a project. The incumbent will plan, evaluate industry alternatives, and design and drive new components, solutions, workflows, and features. They should take the initiative to drive frugality through optimizations without compromising stability or resiliency.

Requirements
Bachelor's/Master's degree in engineering. 12+ years of relevant industry experience, with 3+ years as a lead/architect. Proven expertise in building large-scale platforms on Kubernetes. Proven programming skills in languages such as Python and Golang. Experience with the latest ML development tools. A track record of delivering cloud-scale, data-driven products and services that are widely adopted with large customer bases. Exposure to container runtime environments. Experience in building, deploying, and managing infrastructure in public clouds (specifically AWS).

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
About StatusNeo: At StatusNeo, we are committed to redefining the way businesses operate. As a leader in digital transformation, we leverage cutting-edge technologies and innovative strategies to empower organizations around the globe. Our partnerships with industry giants and our commitment to continuous learning and improvement provide an unparalleled platform for professional growth. Embrace a career at StatusNeo, where we value diversity and inclusivity and foster a hybrid work culture.

Role: Data Engineer
Location: Gurugram

Key experience:
- 3+ years of experience with AWS services including SQS, S3, Step Functions, EFS, Lambda, and OpenSearch.
- Strong experience in API integrations, including experience working with large-scale API endpoints.
- Proficiency in PySpark for data processing and parallelism in large-scale ingestion pipelines.
- Experience with AWS OpenSearch APIs for managing search indices.
- Terraform expertise for automating and managing cloud infrastructure.
- Hands-on experience with AWS SageMaker, including working with machine learning models and endpoints.
- Strong understanding of data flow architectures, document stores, and journal-based systems.
- Experience in parallelizing data processing workflows to meet strict performance and SLA requirements.
- Familiarity with AWS tools like CloudWatch for monitoring pipeline performance.

Additional Preferred Qualifications:
- Strong problem-solving and debugging skills in distributed systems.
- Prior experience in optimizing ingestion pipelines with a focus on cost-efficiency and scalability.
- Solid understanding of distributed data processing and workflow orchestration in AWS environments.

Soft Skills:
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Ability to work in a fast-paced environment and deliver high-quality results under tight deadlines.
- Analytical mindset, with a focus on performance optimization and continuous improvement.
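For illustration only, here is a minimal PySpark sketch of the kind of parallel ingestion step this role centers on: reading raw JSON from S3, filtering malformed records, and writing date-partitioned Parquet. The bucket names, paths, and fields (event_id, event_ts) are hypothetical stand-ins, not details from the posting.

```python
# Illustrative only: a minimal PySpark ingestion step of the kind the role
# describes. Bucket names, paths, and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/events/2024/*/*.json")

cleaned = (
    raw.filter(F.col("event_id").isNotNull())           # drop malformed records
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition key
       .repartition(64, "event_date")                    # parallelize downstream work
)

(cleaned.write
        .mode("append")
        .partitionBy("event_date")
        .parquet("s3a://example-curated-bucket/events/"))
```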
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Lead Cloud Engineer at our organization, you will be responsible for designing and building cloud-based distributed systems to address complex business challenges for some of the world's largest companies. Leveraging your expertise in software engineering, cloud engineering, and DevOps, you will craft technology stacks and platform components that empower cross-functional AI Engineering teams to develop robust, observable, and scalable solutions. Working as part of a diverse and globally distributed engineering team, you will actively engage in the complete engineering life cycle, encompassing the design, development, optimization, and deployment of solutions and infrastructure at a scale that matches the world's leading companies.

Your core responsibilities will include:
- Architecting cloud solutions and distributed systems for full-stack AI software and data solutions
- Implementing, testing, and managing Infrastructure as Code (IAC) for cloud-based solutions, covering areas such as CI/CD, data integrations, APIs, web and mobile apps, and AI solutions
- Defining and implementing scalable, observable, manageable, and self-healing cloud-based solutions across AWS, Google Cloud, and Azure
- Collaborating with diverse teams, including product managers, data scientists, and other engineers, to deliver analytics and AI features that align with business requirements and user needs
- Utilizing Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in the cloud, ensuring optimal performance and availability
- Developing and maintaining APIs and microservices to expose analytics functionality to internal and external consumers while adhering to best practices for API design and documentation
- Implementing robust security measures to safeguard sensitive data and ensure compliance with data privacy regulations and organizational policies
- Monitoring and troubleshooting application performance continuously to identify and resolve issues affecting system reliability, latency, and user experience
- Participating in code reviews and contributing to the establishment and enforcement of coding standards and best practices to uphold the quality and maintainability of the codebase
- Staying abreast of emerging trends and technologies in cloud computing, data analytics, and software engineering to identify opportunities for enhancing the analytics platform's capabilities
- Collaborating closely with business consulting staff and leaders to assess opportunities and develop analytics solutions for clients across various sectors

To be successful in this role, you should have the following qualifications:
- A Master's degree in Computer Science, Engineering, or a related technical field
- At least 6 years of experience, with a minimum of 3 years at the Staff level or equivalent
- Proven experience as a cloud engineer and software engineer in product engineering or professional services organizations
- Experience in designing and delivering cloud-based distributed solutions, with certifications in GCP, AWS, or Azure considered advantageous
- Proficiency in building infrastructure as code using tools such as Terraform (preferred), CloudFormation, Pulumi, AWS CDK, or CDKTF
- Familiarity with software development lifecycle nuances
- Experience with configuration management tools like Ansible, Salt, Puppet, or Chef
- Proficiency in monitoring and analytics platforms such as Grafana, Prometheus, Splunk, SumoLogic, New Relic, DataDog, CloudWatch, or Nagios/Icinga
- Expertise in CI/CD deployment pipelines (e.g., GitHub Actions, Jenkins, Travis CI, GitLab CI, Circle CI)
- Hands-on experience in building backend APIs, services, and integrations using Python
- Practical experience with Kubernetes through services like GKE, EKS, or AKS considered a plus
- Ability to collaborate effectively with internal and client teams and stakeholders
- Proficiency in using Git for versioning and collaboration
- Exposure to LLMs, prompt engineering, and LangChain considered advantageous
- Experience with workflow orchestration tools such as dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or others
- Proficiency in implementing large-scale structured or unstructured databases, orchestration, and container technologies like Docker or Kubernetes
- Strong interpersonal and communication skills to articulate and discuss complex engineering concepts with colleagues and clients from diverse disciplines
- Curiosity, proactivity, and critical thinking in problem-solving
- A solid foundation in computer science principles related to data structures, algorithms, automated testing, object-oriented programming, performance complexity, and the impact of computer architecture on software performance
- Knowledge of designing API interfaces and data architecture, database schema design, and database scalability
- Familiarity with Agile development methodologies

If you are seeking a dynamic and challenging opportunity to contribute to cutting-edge projects and collaborate with a diverse team of experts, we invite you to join us at Bain & Company. As a global consultancy dedicated to partnering with change makers worldwide, we are committed to achieving extraordinary results, outperforming the competition, and reshaping industries. With a focus on delivering tailored, integrated solutions and leveraging a network of digital innovators, we strive to drive superior outcomes that endure. Our ongoing investment in pro bono services underscores our dedication to supporting organizations addressing pressing issues in education, racial equity, social justice, economic development, and the environment. Recognized with a platinum rating from EcoVadis, we are positioned in the top 1% of all companies for our environmental, social, and ethical performance. Since our inception in 1973, we have measured our success by the success of our clients and maintain the highest level of client advocacy in the industry.
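Since the role calls out infrastructure as code with tools including Pulumi, a minimal hedged sketch in Pulumi's Python SDK follows. The resource name and settings are hypothetical examples, not anything prescribed by the posting.

```python
# Illustrative IaC sketch using Pulumi's Python SDK (one of the tools the
# posting names). Resource name and settings are hypothetical examples.
import pulumi
import pulumi_aws as aws

# Versioned S3 bucket for pipeline artifacts.
artifacts = aws.s3.Bucket(
    "analytics-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Export the bucket name so other stacks or pipelines can reference it.
pulumi.export("artifacts_bucket", artifacts.id)
```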
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
You have a minimum of 3 years of experience working with various AWS services such as SQS, S3, Step Functions, EFS, Lambda, and OpenSearch. Your role will involve handling API integrations, particularly with large-scale endpoints. Proficiency in PySpark is required for data processing and parallelism within large-scale ingestion pipelines. Additionally, you should be familiar with AWS OpenSearch APIs for managing search indices.

Your responsibilities will include utilizing Terraform expertise to automate and manage cloud infrastructure. Hands-on experience with AWS SageMaker is necessary, including working with machine learning models and endpoints. A strong understanding of data flow architectures, document stores, and journal-based systems is expected from you. Experience in parallelizing data processing workflows to meet performance and SLA requirements is essential. Familiarity with AWS tools like CloudWatch for monitoring pipelines is preferred. You should also possess strong problem-solving and debugging skills within distributed systems. Prior experience in optimizing ingestion pipelines for cost-efficiency and scalability is an advantage, along with a solid understanding of distributed data processing and workflow orchestration in AWS environments.

In terms of soft skills, effective communication and collaboration skills are necessary for seamless teamwork across different functions. The ability to thrive in a fast-paced environment and deliver high-quality results within tight deadlines is crucial. An analytical mindset focused on performance optimization and continuous improvement will be beneficial in this role.
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Senior Developer specializing in SnapLogic and Apache Airflow, you will be responsible for designing, developing, and maintaining enterprise-level data integration solutions. Your expertise in ETL development, workflow orchestration, and cloud technologies will be crucial for automating data workflows, optimizing performance, and ensuring the reliability and scalability of data systems.

Your key responsibilities will include designing, developing, and managing ETL pipelines using SnapLogic to ensure efficient data transformation and integration across various systems and applications. You will leverage Apache Airflow for workflow automation, job scheduling, and task dependencies to ensure optimized execution and monitoring. Collaboration with cross-functional teams such as Data Engineering, DevOps, and Data Science will be essential to understand data requirements and deliver effective solutions.

In this role, you will be involved in designing and implementing data pipeline architectures to support large-scale data processing in cloud environments like AWS, Azure, and GCP. Developing reusable SnapLogic pipelines, integrating with third-party applications and data sources, optimizing pipeline performance, and providing guidance to junior developers will be part of your responsibilities. Additionally, troubleshooting pipeline failures and implementing automated testing, continuous integration (CI), and continuous delivery (CD) practices for data pipelines will be crucial for maintaining high data quality and minimal downtime.

The required skills and experience for this role include at least 6 years of hands-on experience in data engineering with a focus on SnapLogic and Apache Airflow. Proficiency in SnapLogic Designer, the SnapLogic cloud environment, and Apache Airflow for building data integrations and ETL pipelines is essential. You should have a strong understanding of ETL concepts, data integration, cloud platforms like AWS, Azure, or Google Cloud, and data storage systems such as S3, Azure Blob, and Google Cloud Storage, as well as experience with SQL, relational databases, NoSQL databases, REST APIs, and CI/CD pipelines. Your problem-solving skills, ability to work in an Agile development environment, and strong communication and collaboration skills will be valuable assets in this role. By staying current with new SnapLogic features, Airflow upgrades, and industry best practices, you will contribute to the continuous improvement of data integration solutions.

Join our team at Virtusa, where teamwork, quality of life, and professional development are values we embody. Be part of a global team that cares about your growth and provides exciting projects, opportunities, and exposure to state-of-the-art technologies throughout your career with us. At Virtusa, great minds come together to nurture new ideas and foster excellence in a dynamic environment.
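As a hedged illustration of the Airflow orchestration this role describes (scheduling, task dependencies, retries), the sketch below wires two hypothetical tasks into a daily DAG. The DAG id, task names, and called functions are illustrative stand-ins, not details of any actual pipeline.

```python
# Illustrative only: a minimal Airflow DAG showing scheduling, retries, and
# task dependencies of the kind the posting describes. Names are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from source system")  # placeholder for an extract step


def load():
    print("load transformed data to warehouse")  # placeholder downstream step


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_load  # run load only after extract succeeds
```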
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
You will be joining Salesforce, the Customer Company, known for inspiring the future of business by combining AI, data, and CRM technologies. As part of the Marketing AI/ML Algorithms and Applications team, you will play a crucial role in enhancing Salesforce's marketing initiatives by implementing cutting-edge machine learning solutions. Your work will directly impact the effectiveness of marketing efforts, contributing to Salesforce's growth and innovation in the CRM and Agentic enterprise space.

In the position of Lead / Staff Machine Learning Engineer, you will be responsible for developing and deploying ML model pipelines that drive marketing performance and deliver customer value. Working closely with cross-functional teams, you will lead the design, implementation, and operations of end-to-end ML solutions at scale. Your role will involve establishing best practices, mentoring junior engineers, and ensuring the team remains at the forefront of ML innovation.

Key Responsibilities:
- Define and drive the technical ML strategy, emphasizing robust model architectures and MLOps practices
- Lead end-to-end ML pipeline development, focusing on automated retraining workflows and model optimization
- Implement infrastructure-as-code, CI/CD pipelines, and MLOps automation for model monitoring and drift detection
- Own the MLOps lifecycle, including model governance, testing standards, and incident response for production ML systems
- Establish engineering standards for model deployment, testing, version control, and code quality
- Design and implement monitoring solutions for model performance, data quality, and system health
- Collaborate with cross-functional teams to deliver scalable ML solutions with measurable impact
- Provide technical leadership in ML engineering best practices and mentor junior engineers in MLOps principles

Position Requirements:
- 8+ years of experience in building and deploying ML model pipelines with a focus on marketing
- Expertise in AWS services, particularly SageMaker and MLflow, for ML experiment tracking and lifecycle management
- Proficiency in containerization, workflow orchestration, Python programming, ML frameworks, and software engineering best practices
- Experience with MLOps practices, feature engineering, feature store implementations, and big data technologies
- Track record of leading ML initiatives with measurable marketing impact and strong collaboration skills

Join us at Salesforce to drive transformative business impact and shape the future of customer engagement through innovative AI solutions.
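To ground the MLflow-based experiment tracking the requirements mention, here is a minimal hedged sketch of logging one training run. The experiment name, model, parameters, and metric are hypothetical; a production pipeline would log these from an automated retraining workflow.

```python
# Illustrative only: minimal MLflow experiment tracking of the kind the
# posting mentions. Experiment name, params, and metric are hypothetical.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

mlflow.set_experiment("marketing-propensity-demo")
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_tr, y_tr)

    mlflow.log_params(params)                                  # hyperparameters
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    mlflow.log_metric("test_auc", auc)                         # evaluation metric
    mlflow.sklearn.log_model(model, "model")                   # versioned artifact
```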
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Lead Cloud Engineer at our organization, you will have the opportunity to design and build cloud-based distributed systems that address complex business challenges for some of the world's largest companies. Leveraging your expertise in software engineering, cloud engineering, and DevOps, you will play a crucial role in developing technology stacks and platform components that empower cross-functional AI Engineering teams to deliver robust, observable, and scalable solutions. Working within a diverse and globally distributed engineering team, you will engage in the complete engineering lifecycle, including designing, developing, optimizing, and deploying solutions and infrastructure at a scale that matches the world's leading companies.

Your core responsibilities will involve:
- Designing cloud solutions and distributed systems architecture for full-stack AI software and data solutions
- Implementing, testing, and managing Infrastructure as Code (IAC) for cloud-based solutions, encompassing CI/CD, data integrations, APIs, web and mobile apps, and AI solutions
- Defining and implementing scalable, observable, manageable, and self-healing cloud-based solutions across AWS, Google Cloud, and Azure
- Collaborating closely with cross-functional teams to define and implement analytics and AI features that align with business requirements and user needs
- Utilizing Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability
- Developing and maintaining APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation
- Implementing robust security measures to safeguard sensitive data and ensure compliance with data privacy regulations and organizational policies
- Continuously monitoring and troubleshooting application performance to identify and resolve issues affecting system reliability, latency, and user experience
- Participating in code reviews and contributing to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code
- Keeping abreast of emerging trends and technologies in cloud computing, data analytics, and software engineering to identify opportunities for enhancing the analytics platform
- Collaborating with business consulting staff and leaders to assess opportunities and develop analytics solutions for clients across various sectors

To excel in this role, you should possess:
- A Master's degree in Computer Science, Engineering, or a related technical field
- 6+ years of experience, with a minimum of 3+ years at the Staff level or equivalent
- Proven experience as a cloud engineer and software engineer in product engineering or professional services organizations
- Experience designing and delivering cloud-based distributed solutions, with GCP, AWS, or Azure certifications considered advantageous
- Proficiency in building infrastructure as code using tools such as Terraform, CloudFormation, Pulumi, AWS CDK, or CDKTF
- Deep familiarity with the software development lifecycle and configuration management tools
- Experience with monitoring and analytics platforms, CI/CD deployment pipelines, backend APIs, Kubernetes, Git, and workflow orchestration
- Strong interpersonal and communication skills, along with strong computer science fundamentals

Join our team at Bain & Company, a global consultancy dedicated to partnering with change makers worldwide to shape the future and achieve extraordinary results.
Posted 1 week ago
1.0 - 3.0 years
1 - 2 Lacs
Noida, Gurugram
Work from Office
R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work for 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

Role Objective: To bill out medical accounts with accuracy within defined timelines and reduce rejections for payers.
- Process accounts accurately per US medical billing within the defined TAT.
- Process payer rejections with accuracy within the defined TAT.
- 24x7 environment; open to night shifts.
- Good analytical skills and proficiency with MS Word, Excel, and PowerPoint.

Interview Details:
Interview Mode: Face-to-Face
Walk-in Days: Monday to Friday
Walk-in Timings: 12 PM to 3 PM
Walk-in Address: Tower 1, 2nd Floor, Candor TechSpace, Sector 48, Tikri, Gurugram
HR: Abhishek Tanwar, 9971338456 / atanwar712@r1rcm.com

Qualifications:
- Graduate in any discipline from a recognized educational institute.
- Good analytical skills and proficiency with MS Word, Excel, and PowerPoint.
- Good communication skills (both written and verbal).

Skill Set:
- Good healthcare knowledge, including knowledge of Medicare and Medicaid.
- Ability to interact positively with team members, peer group, and seniors.

Benefits and Amenities:
- 5-day work week, with both-side transport facility and meals.
- Apart from development and engagement programs, R1 offers a transportation facility to all its employees, with a specific focus on the security of female employees who work round the clock, be it in office premises or transport/cab services.
- 24x7 medical support is available at all office locations, and R1 provides Mediclaim insurance for you and your dependents.
- All R1 employees are covered under term-life insurance and personal accidental insurance.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
Apply Digital is a global digital transformation partner specializing in Business Transformation Strategy, Product Design & Development, Commerce, Platform Engineering, Data Intelligence, and Change Management. Our mission is to assist change agents in modernizing their organizations and delivering impactful results to their business and customers. Whether our clients are at the beginning, accelerating, or optimizing stage, we help them integrate composable technology as part of their digital transformation journey. Leveraging our extensive experience in developing intelligent products and utilizing AI tools, we drive value for our clients.

Founded in 2016 in Vancouver, Canada, Apply Digital has expanded to nine cities across North America, South America, the UK, and Europe. We are excited to announce the launch of a new office in Delhi NCR, India, as part of our ongoing expansion.

At Apply Digital, we advocate for a "One Team" approach, where we operate within a "pod" structure that combines senior leadership, subject matter experts, and cross-functional skill sets. This structure is supported by agile methodologies like scrum and sprint cadences, ensuring seamless collaboration and progress towards desired outcomes. Our team embodies our SHAPE values (smart, humble, active, positive, and excellent) to create a safe, empowered, respectful, and enjoyable work environment where everyone can connect, grow, and make a difference together. Apply Digital is a hybrid-friendly organization with remote work options available. The preferred candidate for this position should be based in or near the Delhi/NCR region of India, with working hours overlapping with the Eastern Standard Timezone (EST).

The ideal candidate for this role will be responsible for designing, building, and maintaining scalable data pipelines and architectures to support analytical and operational workloads. Key responsibilities include optimizing ETL/ELT pipelines, integrating data pipelines into cloud-native applications, managing cloud data warehouses, ensuring data governance and security, collaborating with analytics teams, and maintaining data documentation. The candidate should have strong proficiency in English, experience in data engineering, expertise in SQL and Python, familiarity with cloud data platforms, and knowledge of ETL/ELT frameworks and workflow orchestration tools.

Apply Digital offers a comprehensive benefits package that includes private healthcare coverage, Provident fund contributions, and a gratuity bonus after five years of service. We prioritize work-life balance with flexible personal time off policies and provide opportunities for skill development through training budgets, certifications, workshops, mentorship, and peer support. Apply Digital is committed to fostering an inclusive workplace where individual differences are celebrated, and equal opportunities are provided to all team members.
Posted 1 week ago
4.0 - 7.0 years
5 - 15 Lacs
Bengaluru
Hybrid
Position: Senior Boomi Integration Developer
Experience: 4 to 6 Years
Location: Hybrid, Kalyan Nagar, Bangalore
Employment Type: Full-Time

Job Overview: We are seeking a highly skilled Senior Boomi Integration Developer with 4-6 years of hands-on experience in designing, building, and maintaining integration solutions using Boomi AtomSphere. The ideal candidate will bring deep integration knowledge, strong technical skills, and a proactive mindset to support digital transformation initiatives.

Key Responsibilities:
- Design and develop robust, scalable integration solutions using Boomi.
- Lead the end-to-end integration lifecycle, from requirements gathering to deployment and support.
- Implement and manage Boomi components: connectors, processes, maps, business rules, and exception handling.
- Collaborate with functional teams to understand business requirements and translate them into technical solutions.
- Optimize and troubleshoot existing integrations to ensure performance and reliability.
- Define and maintain integration standards, documentation, and best practices.
- Provide mentorship or guidance to junior developers when required.
- Work across multiple applications and platforms (e.g., Salesforce, NetSuite, SAP, Workday, custom APIs).

Required Skills:
- 4-6 years of experience in Boomi development.
- Expertise in REST/SOAP web services, XML, JSON, and flat file integrations.
- Strong understanding of integration patterns and enterprise architecture.
- Proven ability to manage complex integration scenarios including real-time and batch processing.
- Experience with API management, data transformation, and workflow orchestration.
- Proficiency in error handling, logging, and performance tuning of Boomi processes.
- Basic scripting knowledge in Groovy or JavaScript is a plus.

Preferred Qualifications:
- Boomi Developer Certification (preferred or in progress).
- Experience with CI/CD, version control (Git), and DevOps practices.
- Familiarity with other iPaaS platforms or middleware tools is a plus.
- Strong analytical, problem-solving, and communication skills.
- Ability to work independently and in cross-functional teams.
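Boomi processes are assembled in its low-code designer rather than hand-written, but the integration patterns the posting lists (REST/JSON calls, error handling, retries for real-time scenarios) can be sketched generically. The endpoint and payload below are hypothetical; this is an illustration of the pattern, not Boomi code.

```python
# Generic sketch of a REST/JSON integration step with retries and error
# handling, the pattern the posting describes. Endpoint and payload are
# hypothetical stand-ins.
import time

import requests


def post_with_retry(url: str, payload: dict, attempts: int = 3) -> dict:
    """POST JSON, retrying transient failures with simple exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.post(url, json=payload, timeout=10)
            resp.raise_for_status()          # surface 4xx/5xx as exceptions
            return resp.json()
        except requests.RequestException:
            if attempt == attempts:
                raise                        # retries exhausted: propagate
            time.sleep(2 ** attempt)         # back off: 2s, 4s, ...
    raise RuntimeError("unreachable")


if __name__ == "__main__":
    result = post_with_retry(
        "https://api.example.com/v1/orders",      # hypothetical endpoint
        {"order_id": "A-1001", "status": "NEW"},  # hypothetical payload
    )
    print(result)
```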
Posted 1 week ago
5.0 - 10.0 years
12 - 24 Lacs
Pune
Work from Office
Responsibilities:
* Design, develop, and maintain workflow automations using Veeva Vault, Camunda, Power Automate, REST APIs, BPMN, JavaScript, and Python, and troubleshoot issues.

Share your CV at recruitment@fortitudecareer.com and mention the job title.
Flexi working | Work from home
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
maharashtra
On-site
You will be joining a renowned consulting firm known for being consistently ranked as one of the world's best places to work. The company has maintained a top position on Glassdoor's Best Places to Work list since 2009, emphasizing the importance of extraordinary teams in their business strategy. By intentionally bringing together diverse backgrounds, cultures, experiences, perspectives, and skills in a supportive and inclusive work environment, they ensure that every individual can thrive both professionally and personally.

As part of the Application Engineering experts team within the AI, Insights & Solutions division, you will collaborate with a multidisciplinary group of professionals including analytics, engineering, product management, and design experts. Your role will involve leveraging deep technical expertise along with business acumen to assist clients in addressing their most transformative challenges. Working in integrated teams, you will develop data-driven strategies and innovative solutions to drive competitive advantage for clients by harnessing the power of data and artificial intelligence.

Your responsibilities will include designing, developing, and maintaining cloud-based AI applications using a full-stack technology stack to deliver high-quality, scalable, and secure solutions. You will collaborate with cross-functional teams to define and implement analytics features, utilize Kubernetes and containerization technologies for deployment, develop APIs and microservices, ensure robust security measures, monitor application performance, contribute to coding standards, stay updated on emerging technologies, automate deployment processes, and collaborate closely with clients to assess opportunities and develop analytics solutions.

To qualify for this position, you are required to have a Master's degree in Computer Science, Engineering, or a related technical field, along with at least 6 years of experience at a Senior or Staff level. Proficiency in client-side and server-side technologies, cloud platforms, Python, Git, DevOps, CI/CD, and various other technical skills is necessary. Additionally, strong interpersonal and communication skills, curiosity, proactivity, critical thinking, and a solid foundation in computer science fundamentals are essential for this role. This role also requires a willingness to travel up to 30% of the time.

If you are looking for an opportunity to work in a collaborative and supportive environment, continuously learn and grow, and contribute to developing cutting-edge analytics solutions for clients across different sectors, this position may be the perfect fit for you.
Posted 2 weeks ago
4.0 - 6.0 years
15 - 25 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Data Engineer with a minimum of 4+ years of experience, strong in any cloud platform, ETL tools, and scripting languages, with at least one basic cloud certification. Contact/WhatsApp: +919985831110 / prashanth@livecjobs.com *JOB IN BANGALORE, PUNE, MUMBAI*

Required Candidate Profile:
- Experience in any of AWS, Azure, GCP
- SQL
- Any ETL tool
- Python or UNIX shell scripting
- Certification: any basic cloud (AWS, Azure, GCP) certification
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Lead Cloud Engineer at our organization, you will have the opportunity to design and build cloud-based distributed systems that address complex business challenges for some of the world's largest companies. Drawing upon your expertise in software engineering, cloud engineering, and DevOps, you will play a pivotal role in crafting technology stacks and platform components that empower cross-functional AI Engineering teams to develop robust, observable, and scalable solutions. Working within a diverse and globally distributed engineering team, you will engage in the full engineering life cycle, spanning from designing and developing solutions to optimizing and deploying infrastructure at the scale of leading global enterprises.

Your core responsibilities will include designing cloud solutions and distributed systems architecture for full-stack AI software and data solutions. You will be involved in implementing, testing, and managing Infrastructure as Code (IAC) for cloud-based solutions encompassing CI/CD, data integrations, APIs, web and mobile apps, and AI solutions. Collaborating with various teams such as product managers, data scientists, and fellow engineers, you will define and implement analytics and AI features that align with business requirements and user needs.

Furthermore, your role will involve leveraging Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability. You will be responsible for developing and maintaining APIs and microservices to expose analytics functionality to internal and external consumers, in adherence to best practices for API design and documentation. Implementing robust security measures to safeguard sensitive data and ensure compliance with data privacy regulations will also be a key aspect of your responsibilities.

In addition, you will continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience. Engaging in code reviews and contributing to the establishment and enforcement of coding standards and best practices will be essential to ensure the delivery of high-quality, maintainable code. Keeping abreast of emerging trends and technologies in cloud computing, data analytics, and software engineering will enable you to identify opportunities for enhancing the capabilities of the analytics platform. Collaborating closely with business consulting staff and leaders within multidisciplinary teams, you will assess opportunities and develop analytics solutions for our clients across various sectors. Your role will also involve influencing, educating, and providing direct support for the analytics application engineering capabilities of our clients.

To excel in this role, you should possess a Master's degree in Computer Science, Engineering, or a related technical field, along with at least 6+ years of experience, including a minimum of 3+ years at the Staff level or equivalent. Proven experience as a cloud engineer and software engineer in product engineering or professional services organizations is essential. Additionally, experience in designing and delivering cloud-based distributed solutions and possessing certifications in GCP, AWS, or Azure would be advantageous. Deep familiarity with the software development lifecycle, configuration management tools, monitoring and analytics platforms, CI/CD deployment pipelines, backend APIs, Kubernetes, and containerization technologies is highly desirable. Your ability to work closely with internal and client teams, along with strong interpersonal and communication skills, will be critical for collaboration and effective technical discussions. Curiosity, proactivity, critical thinking, and a strong foundation in computer science fundamentals are qualities that we value.

If you have a passion for innovation, a commitment to excellence, and a drive to make a meaningful impact in the field of cloud engineering, we invite you to join our dynamic team at Bain & Company.
Posted 2 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
The Data Transformation Team is responsible for maintaining and evolving robust data transformation pipelines using dbt, enabling consistent and high-quality data delivery to power our BX data catalog and downstream analytics. We promote best practices in data engineering and work collaboratively across all BXTI teams to elevate the overall data maturity of the organization. The success of this role is measured across two core capability areas:

1) Database Design & Data Analysis
Success in this role requires a strong foundation in data modeling and database design, with the ability to structure data that supports scalable and efficient analytics. The ideal candidate can analyze raw data, interpret business logic, and translate it into well-documented, tested data models. Familiarity with SDLC processes is essential, including requirement gathering, validation, and quality assurance of data assets.

2) Technical Execution & Infrastructure
The ideal candidate has strong expertise in developing and managing data transformation pipelines using SQL and Python, with a focus on performance, scalability, and reliability. They should be well-versed in orchestrating workflows, deploying solutions in Snowflake, and working with AWS services across various environments. Experience with CI/CD using tools like Jenkins, along with Docker for containerization, is essential for maintaining robust and repeatable deployment processes.

Requirements:
- 3+ years of hands-on experience in data transformation, data warehouse/database development, and ETL processing on large-scale datasets.
- Proven expertise in SQL with a deep understanding of complex queries, including joins, unions, CTEs, and conditional logic.
- Proficiency in Python for scripting, automation, and data manipulation.
- Comfortable working in Linux environments and with version control systems (e.g., Git).
- Experience working within SDLC frameworks and Agile methodologies in a professional software development setting.
- Experience with containerization technologies such as Docker, Kubernetes, or serverless architectures.
- Strong organizational and multitasking skills, with the ability to manage multiple parallel projects effectively.
- Excellent communication skills, both written and verbal, to collaborate with business and technical teams toward shared outcomes.
- Self-driven and proactive, with a strong customer-focused mindset.
- Working knowledge of Regular Expressions (Regex).
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.

Preferred / Nice to Have:
- Familiarity with dbt (data build tool) for managing and scaling transformation logic.
- Hands-on experience with business intelligence tools like Tableau or Sigma.

Mandatory Skills: Data Warehousing. Experience: 3-5 Years.
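As a small, hedged illustration of how a CI stage (e.g., in Jenkins) might drive the dbt transformations this team maintains, the sketch below shells out to the dbt CLI from Python. The target name is hypothetical, and the snippet assumes the dbt CLI is installed on the CI agent.

```python
# Illustrative CI step: invoking dbt from Python (for example, inside a
# Jenkins pipeline stage) to run and test transformation models together.
# The --target name is hypothetical; assumes the dbt CLI is installed.
import subprocess
import sys


def run_dbt(*args: str) -> None:
    """Run a dbt command, failing the CI step on a non-zero exit code."""
    cmd = ["dbt", *args]
    print("running:", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)  # propagate failure so CI marks the build red


if __name__ == "__main__":
    run_dbt("deps")                     # install package dependencies
    run_dbt("build", "--target", "ci")  # run models and their tests together
```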
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
Apply Digital is a global digital transformation partner for change agents, offering expertise in Business Transformation Strategy, Product Design & Development, Commerce, Platform Engineering, Data Intelligence, and Change Management. Our goal is to help clients modernize their organizations and deliver meaningful impact to both their business and customers, whether they are initiating, accelerating, or optimizing their digital transformation journey. We specialize in implementing composable tech and leveraging our experience in building smart products and utilizing AI tools to drive value. With over 650 team members, we have successfully transformed global companies like Kraft Heinz, NFL, Moderna, Lululemon, Atlassian, Sony, American Express, and Harvard Business School.

Founded in 2016 in Vancouver, Canada, Apply Digital has expanded to nine cities across North America, South America, the UK, and Europe. We are excited to launch a new office in Delhi NCR, India.

At Apply Digital, we embrace the "One Team" approach, operating within a "pod" structure that brings together senior leadership, subject matter experts, and cross-functional skill sets. Our teams work within a common tech and delivery framework supported by well-organized scrum and sprint cadences, ensuring alignment towards desired outcomes through regular retrospectives. We envision Apply Digital as a safe, empowered, respectful, and fun community wherever we operate globally. Our team strives to embody our SHAPE values (smart, humble, active, positive, and excellent), creating a space for connection, growth, and mutual support to make a difference every day. Apply Digital is a hybrid-friendly organization with remote options available. The preferred candidate for the role should be based in or within commutable distance to Delhi/NCR, India, working hours that overlap with the Eastern Standard Timezone (EST).

The client is seeking an experienced Data Engineer to design, build, and maintain scalable data pipelines and architectures to support analytical and operational workloads. Responsibilities include developing and optimizing ETL/ELT pipelines, integrating data pipelines into cloud-native applications, managing cloud data warehouses, implementing data governance and security best practices, collaborating with analytics teams, maintaining data documentation, monitoring and optimizing data pipelines, and staying updated on emerging data engineering technologies.

Requirements:
- Strong proficiency in English (written and verbal communication).
- Experience working with remote teams across different time zones.
- 5+ years of data engineering experience with expertise in building scalable data pipelines.
- Proficiency in SQL and Python for data modeling and processing.
- Experience with Google Cloud Platform (GCP) and tools like BigQuery, Cloud Storage, and Pub/Sub.
- Knowledge of ETL/ELT frameworks, workflow orchestration tools, data privacy, and security best practices.
- Strong problem-solving skills and excellent communication abilities.

Nice to have:
- Experience with real-time data streaming solutions, machine learning workflows, BI tools, Terraform, and data integrations.
- Knowledge of Infrastructure as Code (IaC) in data environments.

Apply Digital offers comprehensive benefits including private healthcare coverage, contributions to Provident fund, a gratuity bonus, a flexible vacation policy, engaging projects with global impact, an inclusive and safe work environment, and learning opportunities. Apply Digital is dedicated to celebrating differences, promoting equal opportunity, and creating an inclusive culture where individual uniqueness is valued and recognized.
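As a hedged sketch of the GCP-side ingestion this posting names (BigQuery, Cloud Storage), the snippet below loads newline-delimited JSON from a bucket into a BigQuery table. The project, dataset, table, and URI are hypothetical, and application-default credentials are assumed.

```python
# Illustrative only: loading newline-delimited JSON from Cloud Storage into
# BigQuery, a typical ELT ingest step. Project/dataset/table/URI are
# hypothetical; assumes application-default GCP credentials.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,                                  # infer schema from the data
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/events/*.json",              # hypothetical source files
    "example-project.analytics.events_raw",           # hypothetical destination
    job_config=job_config,
)
load_job.result()                                     # wait for completion
print(f"loaded {load_job.output_rows} rows")
```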
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
As a Lead Platform Engineer at our esteemed organization, you will play a pivotal role in designing and constructing cloud-based distributed systems that tackle intricate business dilemmas for some of the largest companies globally. Leveraging your profound expertise in software engineering, cloud engineering, and DevOps, you will be instrumental in crafting technology stacks and platform components that empower cross-functional AI Engineering teams to develop robust, observable, and scalable solutions. Being part of a diverse and globally dispersed engineering team, you will engage in the complete engineering lifecycle, encompassing the design, development, optimization, and deployment of solutions and infrastructure on a scale that caters to the needs of the world's leading corporations.

Your core responsibilities will revolve around:
- Crafting cloud solution and distributed systems architecture for full-stack AI software and data solutions
- Implementing, testing, and managing Infrastructure as Code (IAC) of cloud-based solutions, inclusive of CI/CD, data integrations, APIs, web and mobile apps, and AI solutions
- Defining and executing scalable, observable, manageable, and self-healing cloud-based solutions across AWS, Google Cloud, and Azure
- Collaborating with diverse teams, including product managers, data scientists, and fellow engineers, to define and implement analytics and AI features that align with business requirements and user needs
- Harnessing Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability
- Developing and maintaining APIs and microservices to expose analytics functionality to both internal and external consumers, adhering to best practices for API design and documentation
- Implementing robust security protocols to safeguard sensitive data and uphold compliance with data privacy regulations and organizational policies
- Continuously monitoring and troubleshooting application performance, identifying and resolving issues that impact system reliability, latency, and user experience
- Participating in code reviews and contributing to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code
- Staying abreast of emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identifying opportunities to enhance the capabilities of the analytics platform
- Collaborating closely with and influencing business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across various sectors
- Influencing, educating, and directly supporting the analytics application engineering capabilities of our clients

To be successful in this role, you should possess:
- A Master's degree in Computer Science, Engineering, or a related technical field
- 6+ years of experience, with at least 3+ years at the Staff level or equivalent
- Proven expertise as a cloud engineer and software engineer in either product engineering or professional services organizations
- Experience in designing and delivering cloud-based distributed solutions, with GCP, AWS, or Azure certifications being advantageous
- Deep familiarity with software development lifecycle nuances
- Proficiency in one or more configuration management tools such as Ansible, Salt, Puppet, or Chef
- Proficiency in one or more monitoring and analytics platforms like Grafana, Prometheus, Splunk, SumoLogic, New Relic, DataDog, CloudWatch, or Nagios/Icinga
- Experience with CI/CD deployment pipelines (e.g., GitHub Actions, Jenkins, Travis CI, GitLab CI, Circle CI)
- Experience in building backend APIs, services, and/or integrations using Python
- Practical experience with Kubernetes through services like GKE, EKS, or AKS (beneficial)
- Ability to collaborate effectively with internal and client teams and stakeholders
- Proficiency in Git for versioning and collaboration
- Exposure to LLMs, prompt engineering, and LangChain (a plus)
- Experience with workflow orchestration tools like dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other
- Experience in the implementation of large-scale structured or unstructured databases, orchestration, and container technologies such as Docker or Kubernetes
- Strong interpersonal and communication skills, enabling you to explain and discuss complex engineering technicalities with colleagues and clients from different disciplines at their level of cognition
- Curiosity, proactivity, and critical thinking
- Sound knowledge of computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and the implications of computer architecture on software performance
- Strong understanding of designing API interfaces
- Knowledge of data architecture, database schema design, and database scalability
- Familiarity with Agile development methodologies

At our organization, Bain & Company, a global consultancy dedicated to assisting the world's most ambitious change-makers in shaping the future, we operate across 65 cities in 40 countries. Collaborating closely with our clients, we work as one team with a shared objective of achieving exceptional results, outperforming competitors, and redefining industries. Our commitment to investing more than $1 billion in pro bono services over the next decade reflects our dedication to supporting organizations addressing pressing challenges in education, racial equity, social justice, economic development, and the environment. With a platinum rating from EcoVadis, a prominent platform for environmental, social, and ethical performance ratings for global supply chains, we stand in the top 1% of all companies. Since our inception in 1973, we have gauged our success by the success of our clients and proudly maintain the highest level of client advocacy in the industry.
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Kochi, Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment.

Key Responsibilities:
• Design and implement machine learning models and pipelines using AWS SageMaker and related services.
• Develop and maintain robust data pipelines for training and inference workflows.
• Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions.
• Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies.
• Optimize model performance and ensure scalability and reliability in production environments.
• Monitor deployed models for drift, performance degradation, and anomalies.
• Document processes, architectures, and workflows for reproducibility and compliance.

Required Skills & Qualifications:
• Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
• Solid understanding of machine learning algorithms, model evaluation, and tuning.
• Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch.
• Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration.
• Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes).
• Familiarity with monitoring tools and logging frameworks for ML systems.
• Excellent problem-solving and communication skills.

Preferred Qualifications:
• AWS Certification (e.g., AWS Certified Machine Learning - Specialty).
• Experience with real-time inference and streaming data.
• Knowledge of data governance, security, and compliance in ML systems.
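One common way to "monitor deployed models for drift", as the responsibilities put it, is the Population Stability Index; the sketch below is a generic illustration of that check, not a method prescribed by the posting. The data and the 0.2 alert threshold are the usual textbook stand-ins.

```python
# Illustrative drift check: Population Stability Index (PSI), one common way
# to flag input drift on a deployed model's features. Data and the 0.2
# threshold are hypothetical/conventional, not from the posting.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time sample and a live sample of one feature."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # cover the full real line
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)             # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)             # training-time distribution
    live = rng.normal(0.4, 1.0, 10_000)              # shifted live traffic
    score = psi(train, live)
    print(f"PSI = {score:.3f}", "-> investigate" if score > 0.2 else "-> ok")
```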
Posted 3 weeks ago
5.0 - 8.0 years
8 - 18 Lacs
Hyderabad
Work from Office
We are looking for a highly skilled Data Platform Monitoring & Workflow Engineer to support our enterprise-level SAS-to-cloud migration program. In this role, you will be responsible for automating workflows, establishing monitoring systems, and building code compliance standards for data processing pipelines across SAS Viya and Snowflake platforms.

Preferred candidate profile:
- Experience with SAS tools or environments (training provided)
- Familiarity with Docker/Kubernetes
- Knowledge of cloud platforms (AWS or Azure)
- Unix/Linux system administration
- Experience building or customizing code compliance tools
Posted 3 weeks ago
12.0 - 16.0 years
0 Lacs
pune, maharashtra
On-site
As a key contributor at Avalara, you will be at the forefront of designing the canvas UI, shaping the DSL, and refining the workflow orchestration. Your creativity and passion for technology will be the driving force behind an integration revolution.

You will be responsible for designing and implementing new features while maintaining existing functionality. Writing optimized, testable, scalable, and production-ready code will be a crucial part of your role. Additionally, you will be accountable for writing microservices and APIs, participating in design discussions, building POCs, and contributing to delivering high-quality products, features, and frameworks. Your involvement will span all phases of the development lifecycle, including planning, design, implementation, testing, deployment, and support. You will be required to take necessary corrective measures to address problems, anticipate problem areas in new designs, and focus on optimization, performance, security, observability, scalability, and telemetry. Following agile/scrum processes and rituals, along with adhering to set coding standards, guidelines, and best practices, will be essential. Collaboration with teams and stakeholders to understand requirements and implement the best technical solutions, ones that are simple and intuitive, is a key aspect of the role. Furthermore, providing technical guidance and mentorship to junior engineers, fostering a culture of continuous learning and professional growth within the team, will be part of your responsibilities.

Qualifications:
- Bachelor's/Master's degree in computer science or equivalent.
- 12+ years of full-stack experience in a software development role, shipping complex applications to large-scale production environments.
- Expertise in the C# or Java programming language.
- Knowledge of architectural styles and design patterns to solve complex problems with simple, intuitive design.
- Experience architecting, building, and deploying (CI/CD) highly scalable distributed systems and frameworks for small businesses and enterprises.
- Experience working in an Agile team, with hands-on experience with TDD and BDD.
- Experience converting monoliths to microservices or serverless architecture.
- Experience with monitoring and alerting tools and analyzing system metrics to determine root causes.
- Curiosity to know more about how things work, always looking to improve code quality and development processes.
- Excellent analytical and troubleshooting skills to solve complex problems and critical production issues.
- Passion for delivering the best product in the business.
- Excellent communication skills to collaborate with both technical and non-technical stakeholders.
- Proven record of accomplishment in delivering high-quality software projects on time.

About Avalara: Avalara is defining the relationship between tax and tech, with an industry-leading cloud compliance platform processing nearly 40 billion customer API calls and over 5 million tax returns a year. As a billion-dollar business, Avalara is continuously growing and expanding its tribe to achieve its mission of being part of every transaction globally. With a culture that empowers its people to win, Avalara instills passion and trust in its employees, fostering ownership and achievement. Join the Avalara team and experience a career that is as bright, innovative, and disruptive as the orange they love to wear.
Posted 3 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
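For context on the kind of pipeline step this role describes, here is a minimal PySpark sketch that reads raw events, applies a basic transformation, and writes partitioned output; the paths, column names, and schema are illustrative assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_events_etl").getOrCreate()

# Read raw events (hypothetical input path).
events = spark.read.parquet("hdfs:///raw/events/2024-01-01")

# Basic cleansing and aggregation: count events per type per day.
daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("event_type", F.to_date("event_ts").alias("event_date"))
    .count()
)

# Write results partitioned by date for downstream Hive/Impala queries.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///curated/daily_event_counts"
)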
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
New Delhi, Ahmedabad, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Location: Remote - Delhi / NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
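Since this posting also emphasizes workflow orchestration with Airflow, here is a minimal DAG sketch chaining an ingest step to a transform step; it assumes Airflow 2.x, and the DAG id, schedule, and task callables are illustrative placeholders.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder: pull data from source systems (e.g., via Sqoop or an API).
    print("ingesting raw data")

def transform():
    # Placeholder: trigger the Spark transformation job.
    print("running spark transformation")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task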
Posted 3 weeks ago
5.0 - 10.0 years
12 - 20 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Responsibilities:
- Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using an IaC tool such as Terraform.
- Design and implement scalable ETL pipelines with integrated validation and monitoring.
- Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs.
- Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load (see the sketch after this posting).
- Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools).
- Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins or GitHub Actions).
- Collaborate with developers and data engineers to enforce schema versioning, rollback strategies, and data contract enforcement.

Preferred candidate profile:
- 5+ years of experience in DataOps, DevOps, or data infrastructure roles.
- Proven experience with infrastructure as code (e.g., Terraform, CloudFormation).
- Proven experience with real-time data streaming platforms (e.g., Kinesis, Kafka).
- Proven experience building production-grade data pipelines and monitoring systems in AWS.
- Hands-on experience with tools like AWS Glue, S3, Lambda, Athena, and CloudWatch.
- Strong knowledge of Python and scripting for automation and orchestration.
- Familiarity with data validation frameworks such as Great Expectations, Deequ, or dbt tests.
- Experience with SQL-based data systems (e.g., PostgreSQL).
- Understanding of security, IAM, and compliance best practices in cloud data environments.
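To make the validation-and-alerting flow above concrete, here is a minimal Python sketch of a post-transform check that publishes its result as a custom CloudWatch metric via boto3; the file path, column name, namespace, and metric name are illustrative assumptions, and in practice a framework such as Great Expectations would replace the hand-rolled check.

import boto3
import pandas as pd

# Post-transform validation: null check on a key column (illustrative rule).
df = pd.read_parquet("/tmp/curated/orders.parquet")  # hypothetical output location
null_count = int(df["order_id"].isna().sum())

# Publish the outcome as a custom CloudWatch metric for alerting.
cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="DataQuality",
    MetricData=[{
        "MetricName": "OrderIdNullCount",
        "Value": null_count,
        "Unit": "Count",
    }],
)

if null_count > 0:
    # An SNS notification or pipeline halt would typically follow here.
    print(f"Validation failed: {null_count} null order_id values")

A CloudWatch alarm on this metric (threshold greater than zero) would then drive the SNS-based alerting the responsibilities above call for.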
Posted 1 month ago
4.0 - 7.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Medical Coding Certified Fresher Certifications (CPC/CPC-A/CCS/COC)

About R1 RCM: R1 is a leading provider of technology-driven solutions that help hospitals and health systems manage their financial systems and improve the patient experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation, and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow. A place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts that work together to go beyond for all those we serve. Because we know that all this adds up to something more, a place where we're all together better.

R1 India is proud to be recognized amongst the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion, and diversity is demonstrated through prestigious recognitions, with R1 India ranked amongst the Best in Healthcare, one of India's Top 50 Best Workplaces for Women 2024, amongst India's Top 25 Best Workplaces in Diversity, Equity, Inclusion & Belonging 2024, Top 100 Best Companies for Women by Avtar & Seramount, and amongst the Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 17,000+ strong in India with presence in Delhi NCR, Hyderabad, Bengaluru, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities.

Responsibilities:
- Assign codes to diagnoses and procedures, using ICD (International Classification of Diseases) and CPT (Current Procedural Terminology) codes.
- Ensure codes are accurate and sequenced correctly in accordance with government and insurance regulations.
- Follow up with the provider on any documentation that is insufficient or unclear.
- Communicate with other clinical staff regarding documentation.
- Search for information in cases where the coding is complex or unusual.
- Receive and review patient charts and documents for accuracy.
- Review the previous day's batch of patient notes for evaluation and coding.
- Ensure that all codes are current and active.

Qualifications:
- Education: any graduate.
- Successful completion of a certification program from AHIMA or AAPC.
- Strong knowledge of anatomy, physiology, and medical terminology.
- Familiarity with ICD-10 and CPT codes and procedures.
- Solid oral and written communication skills.
- Able to work independently.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world.
We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
Posted 1 month ago