8.0 - 12.0 years
0 Lacs
haryana
On-site
You should have 8-10 years of operational experience with Microservices and the .NET full stack, including C# or Python development and Docker. Experience with PostgreSQL or Oracle is required. Knowledge of AWS services such as S3 is necessary, and familiarity with AWS Kinesis and AWS Redshift is preferred. A strong desire to learn new technologies is highly valued, and experience with unit testing and Test-Driven Development (TDD) is an asset. You should bring strong team spirit, analytical skills, and the ability to synthesize information, along with a passion for Software Craftsmanship, a culture of excellence, and writing Clean Code. Fluency in English is required given the multicultural, international nature of the team.

In this role, you will have the opportunity to develop your technical skills in C# .NET and/or Python, Oracle, PostgreSQL, AWS, ELK (Elasticsearch, Logstash, Kibana), Git, GitHub, TeamCity, Docker, and Ansible.
Posted 22 hours ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Agivant is seeking a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.

Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum, and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing technologies.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines (a minimal example follows this list).
- Minimum 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elastic Search and its application in data pipelines.
- Proficiency in SQL and experience with data modeling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
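As a rough illustration of the Airflow-orchestrated ELT work this posting describes, here is a minimal DAG skeleton; the task names, schedule, and staging logic are illustrative assumptions, not details from the posting.

```python
# Minimal sketch of an Airflow 2.x DAG wiring extract -> load -> transform.
# Bucket, table, and connection details are assumptions, not posting specifics.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3(**context):
    # Pull source records and stage them in S3 (e.g., via boto3).
    ...

def load_to_snowflake(**context):
    # COPY the staged files into a Snowflake landing table.
    ...

def transform(**context):
    # Run SQL transforms to publish analytics-ready tables.
    ...

with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # `schedule` is the Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    publish = PythonOperator(task_id="transform", python_callable=transform)

    extract >> load >> publish
```

Keeping load and transform as separate tasks mirrors the ELT split the posting emphasizes: raw data lands first, and transforms run inside the warehouse afterward.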
Posted 1 day ago
7.0 - 11.0 years
0 Lacs
delhi
On-site
As a CBRE Software Senior Engineer, you will work under broad direction to supervise, develop, maintain, and enhance client systems. This role is part of the Software Engineering job function and requires successfully executing and monitoring system improvements to increase efficiency.

Responsibilities:
- Develop, maintain, enhance, and test client systems of moderate to high complexity.
- Execute the full software development life cycle (SDLC) to build high-quality, innovative, and performant software.
- Conduct thorough code reviews to ensure high-quality code.
- Estimate technical effort for agile sprint stories.
- Implement performance-optimized solutions and improve the performance of existing systems.
- Serve as the primary technical point of contact on client engagements.
- Investigate and resolve complex data system and software issues in the production environment.
- Design and implement strategic partner integrations.
- Participate in the specification and design of new features at client or business request.
- Evaluate new platforms, tools, and technologies.
- Coach others to develop in-depth knowledge and expertise in most or all areas within the function.
- Provide informal assistance such as technical guidance, code review, and training to coworkers.
- Apply advanced knowledge to seek and develop new, better methods for accomplishing individual and department objectives.
- Showcase expertise in your job discipline and in-depth knowledge of other job disciplines within the organization function.
- Lead by example and model behaviors consistent with CBRE RISE values.
- Anticipate potential objections and persuade others, often at senior levels and of divergent interests, to adopt a different point of view.
- Impact the achievement of customer operational project or service objectives across multidiscipline teams.
- Contribute to new products, processes, standards, and/or operational plans in support of achieving functional goals.
- Communicate difficult and complex ideas with the ability to influence.

Qualifications:
- Bachelor's degree preferred with 7-9 years of relevant experience. In lieu of a degree, a combination of experience and education will be considered.
- Knowledge of Java, Spring Boot, VueJS, unit testing, AWS services (ECS, Fargate, Lambda, RDS, S3, Step Functions), Bootstrap/CSS/CSS3, Docker, DynamoDB, JavaScript/jQuery, Microservices, SNS, and SQS.
- Optional knowledge of .NET, Python, Angular, SQL Server, AppD, and New Relic.
- Innovative mentality to develop methods that go beyond existing solutions.
- Ability to solve unique problems using standard and innovative solutions with a broad impact on the business.
- Expert organizational skills with an advanced inquisitive mindset.

Required Skills:
- Angular
- AWS API Gateway
- AWS CloudFormation
- AWS Lambda
- AWS RDS
- AWS S3
- AWS Step Functions
- Bootstrap/CSS/CSS3
- Docker
- DynamoDB
- Java
- JavaScript/jQuery
- Microservices
- SNS
- Spring Boot
- SQS
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
The ideal candidate should have strong skills in AWS EMR, EC2, AWS S3, CloudFormation templates, Batch data, and AWS CodePipeline services; experience with EKS is an added advantage. This is a hands-on role, so good administrative knowledge of AWS EMR, EC2, AWS S3, CloudFormation templates, and Batch data is expected.

Responsibilities include managing and deploying EMR clusters, backed by a solid understanding of AWS accounts and IAM, along with administrative experience of both EMR persistent and transient clusters. A good understanding of AWS CloudFormation, cluster setup, and AWS networking is essential, and hands-on experience with Infrastructure-as-Code deployment tools such as Terraform is highly desirable. Experience in AWS health monitoring and optimization is required, and knowledge of Hadoop and Big Data is an added advantage.
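For context on the transient-cluster administration mentioned above, a hedged boto3 sketch of launching a short-lived EMR cluster follows; the release label, instance types, counts, and role names are illustrative assumptions.

```python
# Hedged sketch: launching a transient EMR cluster with boto3.
# All names and sizes below are assumptions for illustration.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="transient-etl-cluster",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Transient cluster: terminate automatically once all steps finish.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://example-bucket/emr-logs/",
)
print(response["JobFlowId"])
```

Setting `KeepJobFlowAliveWhenNoSteps=False` is what distinguishes a transient cluster from a persistent one in this API.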
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
You are a Senior Cloud Application Developer (AWS to Azure Migration) with 8+ years of experience and hands-on experience developing applications on both the AWS and Azure platforms. You should have a strong understanding of Azure services for application development and deployment, including Azure IaaS and PaaS.

Your responsibilities include AWS-to-Azure cloud migration, covering service mapping and SDK/API conversion, as well as code refactoring and application remediation for cloud compatibility. You should have a minimum of 5 years of experience in application development using Java, Python, Node.js, or .NET, plus a solid understanding of CI/CD pipelines, deployment automation, and Azure DevOps. Experience with containerized applications, AKS, Kubernetes, and Helm charts is also necessary, and the role involves application troubleshooting, support, and testing in cloud environments.

Experience with the following tech stack is highly preferred:
- Spring Boot REST API, NodeJS REST API
- Apigee config, Spring Server Config
- Confluent Kafka, AWS S3 Sync Connector
- Azure Blob Storage, Azure Files, Azure Functions
- Aurora PostgreSQL to Azure DB migration
- EKS to AKS migration, S3 to Azure Blob Storage
- AWS to Azure SDK conversion

Location options for this role include Hyderabad, Bangalore, or Pune, and you should have a notice period of 10-15 days.
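One primitive behind the S3-to-Azure-Blob migration listed above is an object-by-object copy. A minimal sketch, assuming illustrative bucket, container, and credential names:

```python
# Hedged sketch of a single S3 -> Azure Blob object copy, the kind of building
# block an AWS-to-Azure data migration composes. Names are assumptions.
import os

import boto3
from azure.storage.blob import BlobServiceClient

s3 = boto3.client("s3")
blob_service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

def copy_object(bucket: str, key: str, container: str) -> None:
    """Stream one object from S3 into Azure Blob Storage."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]  # streaming body
    blob = blob_service.get_blob_client(container=container, blob=key)
    blob.upload_blob(body, overwrite=True)

copy_object("legacy-aws-bucket", "exports/data.parquet", "migrated-data")
```

Streaming the S3 body straight into `upload_blob` avoids buffering whole objects in memory, which matters once the migration scales past small files.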
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As a Senior Machine Learning Engineer Contractor specializing in AWS ML pipelines, your primary responsibility will be to design, develop, and deploy advanced ML pipelines within an AWS environment. You will build solutions that automate entity matching for master data management, implement fraud detection systems, handle transaction matching, and integrate GenAI capabilities. The ideal candidate has extensive hands-on experience with AWS services such as SageMaker, Bedrock, Lambda, Step Functions, and S3, plus a strong command of CI/CD practices to keep the solution robust and scalable.

Your key responsibilities include designing end-to-end ML pipelines focused on entity matching, fraud detection, and transaction matching, and integrating generative AI solutions using AWS Bedrock to enhance data processing and decision-making. Collaborating with cross-functional teams to refine business requirements and develop data-driven solutions tailored to master data management needs will also be a crucial aspect of your role.

On the AWS side, you will use SageMaker for model training, deployment, and continuous improvement; leverage Lambda and Step Functions to orchestrate serverless workflows for data ingestion, preprocessing, and real-time processing; and manage data storage, retrieval, and scalability concerns with AWS S3. You will also develop and integrate automated CI/CD pipelines to streamline model testing, deployment, and version control, ensuring rapid iteration and robust deployment practices that keep ML solutions highly available and performant.

Data security and compliance will be a critical aspect of the role: you will implement security best practices to safeguard sensitive data, ensure compliance with organizational and regulatory requirements, and incorporate monitoring and alerting mechanisms to maintain the integrity and performance of deployed ML models. Collaboration and documentation will also figure prominently: you will work closely with business stakeholders, data engineers, and data scientists to keep solutions aligned with evolving business needs, document all technical designs, workflows, and deployment processes to support ongoing maintenance and future enhancements, provide regular progress updates, and adapt to changing priorities in a dynamic environment.

To qualify, you should have at least 5 years of professional experience developing and deploying ML models and pipelines; proven expertise in SageMaker, Bedrock, Lambda, Step Functions, and S3; strong proficiency in Python and/or PySpark; demonstrated experience with CI/CD tools and methodologies; and practical experience building entity matching, fraud detection, and transaction matching solutions within a master data management context. Familiarity with generative AI models and their application within data processing workflows is an added advantage, and strong analytical and problem-solving skills are essential.

You should be able to transform complex business requirements into scalable technical solutions and have strong data analysis capabilities, with a track record of developing models that provide actionable insights. Excellent verbal and written communication skills, the ability to work independently as a contractor while collaborating effectively with remote teams, and a proven record of quickly adapting to new technologies and agile environments are also preferred. A Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field is a plus, as is experience with additional AWS services such as Kinesis, Firehose, and SQS, prior consulting or contracting experience managing deliverables under tight deadlines, and experience in industries where data security and compliance are critical.
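As a hedged sketch of the SageMaker training step such a pipeline might include (the XGBoost container, role ARN, bucket paths, and hyperparameters are all illustrative assumptions):

```python
# Hedged sketch: training a fraud-detection model with the SageMaker Python SDK
# using the built-in XGBoost container. All names below are assumptions.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # assumption

image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.7-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/fraud-model/",
    sagemaker_session=session,
    hyperparameters={"objective": "binary:logistic", "num_round": "200"},
)

# Launch the managed training job against labeled transactions staged in S3.
estimator.fit({
    "train": TrainingInput("s3://example-bucket/fraud/train/", content_type="text/csv")
})
```

In a full pipeline, a Step Functions state machine would typically invoke this training step and then a model-registration or deployment step, with Lambda handling preprocessing.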
Posted 2 days ago
6.0 - 8.0 years
11 - 13 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
iSource Services is hiring for one of their clients for the position of Java Developer.

About the role: We are looking for a skilled Java Developer with strong expertise in Spring Boot, Microservices, and AWS to join our growing team. The ideal candidate must have a proven track record of delivering scalable backend solutions and a minimum of 4 years of hands-on experience with AWS services.

Key Responsibilities:
- Develop and maintain high-performance Java applications using Spring Boot and Microservices architecture
- Integrate with AWS services including Lambda, DynamoDB, SQS, SNS, S3, ECS, and EC2
- Work with event-driven architecture using Kafka
- Collaborate with cross-functional teams to define, design, and ship new features
- Ensure the performance, quality, and responsiveness of applications

Required Skills:
- Strong proficiency in Java (8+), Spring Boot, and Microservices
- Minimum 4 years of hands-on experience with AWS (Lambda, DynamoDB, SQS, SNS, S3, ECS, EC2)
- Experience with Kafka for real-time data streaming
- Solid understanding of system design, data structures, and algorithms
- Excellent problem-solving and communication skills
Posted 3 days ago
3.0 - 8.0 years
0 Lacs
kochi, kerala
On-site
As a Tech Lead Full Stack at Qubryx, a US-based product consulting and development company, you will play a crucial role in leading the development, implementation, and maintenance of software solutions and applications for both client and company web-based products. This is a full-time role covering user interface design and development, testing, and debugging; while the role is primarily based in Kochi, there is flexibility for remote work.

To be considered, you should have at least 8 years of experience on full-cycle software development projects and a minimum of 3 years as a Tech Lead, with a proven track record of designing and developing software applications from scratch. Proficiency in JavaScript, TypeScript, and Node.js, and strong skills in designing and querying NoSQL MongoDB are essential, along with strong SQL skills and experience with SQL Server and Postgres. Experience with AWS Lambda, S3, RDS, and API Gateway, and familiarity with front-end UI frameworks such as React and React Native, are highly desirable. Knowledge of Scrum methodologies, sprint planning, project planning, estimation, and product feature management is crucial, as is experience managing teams of developers, providing technical guidance, and fostering a collaborative team environment.

Preferred qualifications include AWS certifications; experience with Docker, containers, Kubernetes, and microservices; proficiency in Python with past Java or .NET experience; experience with serverless coding on AWS Lambda or Azure Functions, Azure DevOps, and working in Scrum teams; and a deep understanding of DevOps and SRE principles, along with experience implementing DevOps best practices.

As a Tech Lead Full Stack, you should be a self-starter with excellent problem-solving skills and strong verbal and written communication abilities, comfortable working independently as well as collaborating closely with other team members, both offshore and onsite. You should be able to code new features, troubleshoot problems, and identify areas for improvement. If you are a highly motivated individual with a passion for software development and a willingness to learn and grow with the team, we encourage you to apply for this exciting opportunity. Join us at Qubryx and be part of a dynamic team that values innovation, collaboration, and continuous improvement. Benefits include competitive compensation and the opportunity to work on cutting-edge projects with a talented team. To qualify, you should have a bachelor's degree and a minimum of 8 years of experience in relevant technologies.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Full-Stack Developer with 5+ years of experience in the MERN stack, you will handle backend development using Node.js, Express.js, and AWS Lambda. Strong hands-on experience with MongoDB, AWS Neptune, Redis, and other databases is essential for successful project execution, and your expertise in front-end development with React.js, HTML, CSS, and JavaScript (ES6+) will play a crucial role in delivering high-quality user interfaces.

Familiarity with AWS services such as Lambda, API Gateway, S3, CloudFront, IAM, and DynamoDB will be advantageous for integrating and deploying applications effectively. Experience with DevOps tools like GitHub Actions, Jenkins, and AWS CodePipeline is required to streamline the development process, and proficiency in Git-based workflows, Agile methodologies, and tools like JIRA is necessary for collaborative and efficient project management.

On the technical side, you should have expertise in React.js with Redux, Context API, or Recoil, along with HTML5, CSS3, JavaScript (ES6+), and TypeScript. Knowledge of Material UI, Tailwind CSS, Bootstrap, and performance optimization techniques will be crucial for creating responsive, visually appealing web applications. Proficiency in Node.js and Express.js, AWS Lambda, RESTful APIs and GraphQL, and authentication and authorization mechanisms such as JWT, OAuth, and AWS Cognito will be key for building robust server-side applications. Familiarity with microservices, event-driven architecture, MongoDB and Mongoose, AWS Neptune, Redis, and AWS S3 for object storage is essential for developing scalable, efficient applications, and an understanding of cloud and DevOps concepts (AWS services, Infrastructure as Code, CI/CD pipelines, and monitoring and logging tools) is necessary for deploying and maintaining applications in a cloud environment.

Your soft skills, including strong problem-solving abilities, excellent communication, attention to detail, and the ability to mentor junior developers, will be crucial for collaborating with cross-functional teams and providing technical guidance. Adaptability to learning and working with new technologies in a fast-paced environment is essential for staying current and delivering innovative solutions.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Engineer, you will be responsible for developing and maintaining a metadata-driven generic ETL framework to automate ETL code. Your primary tasks will include designing, building, and optimizing ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS, and ingesting data from a variety of structured and unstructured sources such as APIs, RDBMS, flat files, and streaming services.

You will develop and maintain robust data pipelines for both batch and streaming data utilizing Delta Lake and Spark Structured Streaming, and implement data quality checks, validations, and logging mechanisms to ensure data accuracy and reliability. You will optimize pipeline performance, cost, and reliability, and collaborate closely with data analysts, BI teams, and business stakeholders to deliver high-quality datasets. You will also support data modeling efforts, including star and snowflake schemas and a de-normalized tables approach, assist in data warehousing initiatives, and work with orchestration tools like Databricks Workflows to schedule and monitor pipelines effectively.

To excel in this role, you should have hands-on experience in ETL/data engineering roles and strong expertise in Databricks (PySpark, SQL, Delta Lake). Experience with Spark optimization, partitioning, caching, and handling large-scale datasets is crucial. Proficiency in SQL and scripting in Python or Scala is required, along with a solid understanding of data lakehouse/medallion architectures and modern data platforms. Knowledge of cloud storage systems like AWS S3, familiarity with DevOps practices (Git, CI/CD, Terraform, etc.), and strong debugging, troubleshooting, and performance-tuning skills are also essential, and you will be expected to follow best practices for version control, CI/CD, and collaborative development.

If you are passionate about data engineering, enjoy working with cutting-edge technologies, and thrive in a collaborative environment, this role offers an exciting opportunity to contribute to the success of data-driven initiatives within the organization.
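A minimal sketch of the streaming half of such a pipeline on Databricks, assuming the notebook-provided `spark` session and illustrative paths and table names:

```python
# Hedged sketch: incremental file ingestion into a Delta table with Spark
# Structured Streaming on Databricks. Paths and names are assumptions; the
# `spark` session is provided by the Databricks runtime.
raw_stream = (
    spark.readStream
    .format("cloudFiles")                     # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/events/")
    .load("s3://example-bucket/landing/events/")
)

(
    raw_stream.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/_checkpoints/events/")
    .outputMode("append")
    .trigger(availableNow=True)               # process the backlog, then stop
    .toTable("bronze.events")
)
```

The `availableNow` trigger lets the same streaming code run as a scheduled Databricks Workflow job, covering both the batch and streaming cases the posting mentions.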
Posted 3 days ago
6.0 - 8.0 years
17 - 18 Lacs
Hyderabad
Work from Office
Position: Software Developer (Angular, React, Vue.js, Python, Django, Ruby, Lambda, SQS, S3)

Qualifications:
- 5+ years of software development and 3+ years of Python development experience
- 1+ years of Ruby experience preferred
- 3+ years of experience with web frameworks (preferred: Rails or Rack, Django)
- 1+ years of Angular, React, or Vue.js
- Demonstrated experience with AWS services (preferred: Lambda, SQS, S3)
- Experience working in a software-product-driven environment
- Demonstrable knowledge of front-end technologies such as JavaScript, HTML5, CSS3
- Workable knowledge of relational databases (e.g., MySQL, Postgres)
- BS/MS degree in Computer Science or equivalent experience
- Knowledge of version control, such as Git
- Familiarity with Docker (containerized environments)
- Knowledge of testing libraries (ideally rspec, pytest, jest)
- Experience with linters (e.g., RuboCop, Flake8, ESLint)
Posted 4 days ago
3.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
You should have strong experience in PySpark, Python, Unix scripting, Spark SQL, and Hive, be proficient in writing SQL queries and creating views, and possess excellent oral and written communication skills. Prior experience in the insurance domain would be beneficial. A good understanding of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, Oozie, and YARN, is required, as is knowledge of AWS services such as Glue, AWS S3, Lambda functions, Step Functions, and EC2. Experience in data migration from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work.

As a candidate, you should have 6-8 years of technical experience in PySpark and AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, along with 3+ years of experience in AWS. Your primary key skills should include PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), and Big Data with Python, Spark, and Hive, plus exposure to Big Data migration. Secondary skills that would be beneficial include Informatica BDM/PowerCenter, Databricks, and MongoDB.
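For reference, a bare-bones AWS Glue PySpark job of the kind this role maintains might look like the hedged skeleton below; the catalog database, table, and bucket names are assumptions.

```python
# Hedged skeleton of an AWS Glue PySpark job. Database/table/bucket names
# are illustrative assumptions, not details from the posting.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued Hive/S3 table, filter it, and write curated Parquet to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="insurance_db", table_name="claims"
)
df = dyf.toDF().filter("claim_status = 'OPEN'")
df.write.mode("overwrite").parquet("s3://example-bucket/curated/open_claims/")

job.commit()
```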
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
You are a solid hands-on engineer in the video algorithm domain with expertise in developing video compression algorithms for cloud and mobile applications. Your role involves developing video software algorithms using codecs such as H.264 for applications like mobile video sharing, cloud-based video encoding, and optimized video delivery in the broadcast and surveillance domains.

As a developer in this role, you will be part of a core video team dedicated to enhancing user experience and reducing video delivery costs. You should have a solid understanding of video compression fundamentals and practical experience with codecs like H.264, H.265, AV1, and VVC. Knowledge of media codec frameworks on Android and iOS is essential, along with strong programming skills in C/C++ on Linux. Experience in the video streaming domain and familiarity with protocols such as HTTP, RTP, RTSP, and WebRTC are necessary, as is a thorough understanding of HLS, MPEG-DASH, MP4, fMP4, and MOV file formats.

Desirable experience includes working with operating systems like Linux, iOS, and Android; media frameworks such as the Android MediaCodec framework and iOS Video Toolbox; and source control tools like Git. Proficiency in open-source media frameworks like FFmpeg and GStreamer, video filters, scaling, denoising, and blending algorithms, and machine learning techniques for video compression is highly valued, as is an understanding of OS internals such as I/O, networking, and multithreading.

Your specific responsibilities will include developing video compression SDKs for mobile devices, addressing challenges related to video processing, developing new video algorithms using the latest codecs, and improving video content quality and efficiency. You will collaborate with cross-functional teams locally and globally, maintain and extend software components for customer deployments, and work in a fast-paced development environment following the SDLC.

To excel in this role, you must be well organized, willing to take on development challenges, and eager to learn new video technologies. You should have at least 8 years of experience in video compression, knowledge of media frameworks for iOS and Android, and familiarity with tools like GStreamer and FFmpeg. Experience with codecs like H.265 and VP9, building SDKs, AWS S3, Agile methodologies, and video stream analysis tools is beneficial. A Master's degree in Computer Science or Engineering is preferred.

If you meet these requirements and are ready to contribute to a dynamic engineering environment focused on advancing video technology, please send your CV to careers@crunchmediaworks.com.
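Much of this codec work is driven from scripted test harnesses. As a hedged illustration only (the flags are standard FFmpeg options; the file names and settings are assumptions), a small Python wrapper for batch H.264 transcodes:

```python
# Hedged sketch: invoking FFmpeg from Python to transcode a clip to H.264,
# the kind of harness used to batch-compare encoder settings.
import subprocess

def encode_h264(src: str, dst: str, crf: int = 23, preset: str = "medium") -> None:
    """Transcode `src` to H.264/AAC in an MP4 container."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-c:v", "libx264",
            "-crf", str(crf),      # quality: lower = better quality, bigger files
            "-preset", preset,     # speed vs. compression-efficiency trade-off
            "-c:a", "aac",
            dst,
        ],
        check=True,
    )

encode_h264("input.mov", "output.mp4")
```

Sweeping `crf` and `preset` across a clip set and comparing sizes and quality metrics is a common way to evaluate the rate/quality trade-offs the posting describes.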
Posted 5 days ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad
Remote
Tech stack: MongoDB, S3, and Postgres databases; strong experience with data pipelines and mapping; React, Node, and Python; AWS and Lambda.

About the job

Summary: We are seeking a detail-oriented and proactive Data Analyst to lead our file and data operations, with a primary focus on managing data intake from our clients and ensuring data integrity throughout the pipeline. This role is vital to our operational success and will work cross-functionally to support data ingestion, transformation, validation, and secure delivery. The ideal candidate must have hands-on experience with healthcare datasets, especially medical claims data, and be proficient in managing ETL processes and data operations at scale.

Responsibilities

File Intake & Management:
- Serve as the primary point of contact for receiving files from clients, ensuring all incoming data is tracked, validated, and securely stored.
- Monitor and automate data file ingestion using tools such as AWS S3, AWS Glue, or equivalent technologies (a hedged sketch follows this description).
- Troubleshoot and resolve issues related to missing or malformed files and ensure timely communication with internal and external stakeholders.

Data Operations & ETL:
- Develop, manage, and optimize ETL pipelines for processing large volumes of structured and unstructured healthcare data.
- Perform data quality checks, validation routines, and anomaly detection across datasets.
- Ensure consistency and integrity of healthcare data (e.g., EHR, medical claims, ICD/CPT/LOINC codes) during transformations and downstream consumption.

Data Analysis & Reporting:
- Collaborate with data science and analytics teams to deliver operational insights and performance metrics.
- Build dashboards and visualizations using Power BI or Tableau to monitor data flow, error rates, and SLA compliance.
- Generate summary reports and audit trails to ensure HIPAA-compliant data handling practices.

Process Optimization:
- Identify opportunities for automation and efficiency in file handling and ETL processes.
- Document procedures, workflows, and data dictionaries to standardize operations.

Required Qualifications:
- Bachelor's or Master's degree in Health Informatics, Data Analytics, Computer Science, or a related field.
- 5+ years of experience in a data operations or analyst role with a strong focus on healthcare data.
- Demonstrated expertise in working with medical claims data, EHR systems, and healthcare coding standards (e.g., ICD, CPT, LOINC, SNOMED, RxNorm).
- Strong programming and scripting skills in Python and SQL for data manipulation and automation.
- Hands-on experience with AWS, Redshift, RDS, S3, and data visualization tools such as Power BI or Tableau.
- Familiarity with HIPAA compliance and best practices in handling protected health information (PHI).
- Excellent problem-solving skills, attention to detail, and communication abilities.
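As a hedged sketch of the S3 intake monitoring described under File Intake & Management, with the bucket, prefix, and file-type expectations as assumptions:

```python
# Hedged sketch: polling an S3 intake prefix and flagging empty or unexpected
# files before they enter the ETL pipeline. Names are illustrative assumptions.
import boto3

EXPECTED_SUFFIXES = (".csv", ".parquet")

def audit_intake(bucket: str, prefix: str) -> list:
    """Return a simple audit record for each incoming client file."""
    s3 = boto3.client("s3")
    records = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            records.append({
                "key": obj["Key"],
                "size_bytes": obj["Size"],
                "empty": obj["Size"] == 0,
                "unexpected_type": not obj["Key"].endswith(EXPECTED_SUFFIXES),
            })
    return records

for rec in audit_intake("client-intake-bucket", "claims/2024-06/"):
    if rec["empty"] or rec["unexpected_type"]:
        print("flag for follow-up:", rec)
```

Persisting these audit records alongside the pipeline run gives the HIPAA-compliant audit trail the reporting responsibilities call for.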
Posted 6 days ago
9.0 - 12.0 years
14 - 24 Lacs
Gurugram
Remote
We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, and versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems.

Role & responsibilities:
- Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena.
- Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data.
- Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg (see the sketch after this list).
- Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modelling.
- Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging.
- Implement S3 lifecycle policies, intelligent file partitioning, and audit logging.
- Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs.
- Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling.
- Design and manage a forecast feature registry with metrics versioning and traceability.
- Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption.

Preferred candidate profile:
- 9-12 years of experience in data engineering.
- Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and Glue Data Catalog.
- Strong command of PySpark, dbt-core, CTAS query optimization, and partition strategies.
- Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion.
- Experience in S3 metadata tagging and scalable data lake design patterns.
- Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows).
- Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation.
- Strong understanding of time series KPIs, such as revenue forecasts, occupancy trends, or demand volatility.
- Data observability best practices, including field-level logging, anomaly alerts, and classification tagging.
- Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries.
- Familiarity with Superset or Streamlit for QA visualization and UAT reporting.
- Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion.
- Independent, critical thinker with the ability to design for scale and evolving business logic.
- Strong communication and collaboration with BI, QA, and business stakeholders.
- High attention to detail in ensuring data accuracy, quality, and documentation.
- Comfortable interpreting business-level KPIs and transforming them into technical pipelines.
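A minimal sketch of the upsert-friendly write referenced in the responsibilities list, using Apache Hudi from PySpark; the table name, keys, and paths are illustrative assumptions, and `spark` plus `df` (the incremental batch) are assumed to be provided by the job:

```python
# Hedged sketch: an upsert-style write to an Apache Hudi table from PySpark.
# Table, key, and path names are assumptions, not details from the posting.
hudi_options = {
    "hoodie.table.name": "pos_sales",
    "hoodie.datasource.write.recordkey.field": "sale_id",
    "hoodie.datasource.write.partitionpath.field": "sale_date",
    "hoodie.datasource.write.precombine.field": "updated_at",  # latest record wins
    "hoodie.datasource.write.operation": "upsert",
}

(
    df.write.format("hudi")
    .options(**hudi_options)
    .mode("append")   # append mode + upsert operation = merge into the table
    .save("s3://example-bucket/lake/modeled/pos_sales/")
)
```

The precombine field is what makes repeated client file drops safe: when two versions of the same record key arrive, Hudi keeps the one with the newest `updated_at`.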
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
The role of Data Scientist - Clinical Data Extraction & AI Integration in our healthcare technology team requires an experienced individual with 3-6 years of experience. As a Data Scientist in this role, you will focus primarily on medical document processing and data extraction systems, working with advanced AI technologies to create solutions that extract crucial information from clinical documents, thereby improving healthcare data workflows and patient care outcomes.

Your key responsibilities will include designing and implementing statistical models for medical data quality assessment, developing predictive algorithms for encounter classification and validation, building machine learning pipelines for document pattern recognition, creating data-driven insights from clinical document structures, and implementing feature engineering for medical terminology extraction. Furthermore, you will apply natural language processing (NLP) techniques to clinical text, develop statistical validation frameworks for extracted medical data, and build anomaly detection systems for medical document processing. You will also create predictive models for discharge date estimation and encounter duration, and implement clustering algorithms for provider and encounter classification.

In terms of AI and LLM integration, you will integrate and optimize Large Language Models via AWS Bedrock and API services, design and refine AI prompts for clinical content extraction with high accuracy, and implement fallback logic and error handling for AI-powered extraction systems. You will also develop pattern-matching algorithms for medical terminology and create validation layers for AI-extracted medical information.

Healthcare domain expertise is crucial for this role: you will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis, all while ensuring HIPAA compliance and adhering to data security best practices.

Proficiency in programming languages such as Python 3.8+, R, SQL, and JSON, along with familiarity with data science tools like pandas, numpy, scipy, scikit-learn, spaCy, and NLTK, is required. Experience with ML frameworks including TensorFlow, PyTorch, transformers, and Hugging Face, and with visualization tools like matplotlib, seaborn, plotly, Tableau, and Power BI, is desirable. Knowledge of AI platforms such as AWS Bedrock, Anthropic Claude, and OpenAI APIs, and experience with cloud services like AWS (SageMaker, S3, Lambda, Bedrock), will be advantageous, as will familiarity with research tools like Jupyter notebooks, Git, Docker, and MLflow.
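As a hedged sketch of the Bedrock-based extraction with fallback logic described above (the model id, prompt, and response parsing are assumptions, and production PHI handling would need HIPAA-grade controls):

```python
# Hedged sketch: calling an LLM via AWS Bedrock to pull structured fields from
# clinical text, with a crude fallback path instead of a hard failure.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

PROMPT = "Extract the admission and discharge dates from this note as JSON:\n{note}"

def extract_dates(note: str,
                  model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> dict:
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{"role": "user", "content": PROMPT.format(note=note)}],
    })
    try:
        resp = bedrock.invoke_model(modelId=model_id, body=body)
        text = json.loads(resp["body"].read())["content"][0]["text"]
        return json.loads(text)  # expects the model to return bare JSON
    except Exception:
        # Fallback: route to rule-based extraction / human review, don't fail.
        return {"admission_date": None, "discharge_date": None, "needs_review": True}
```

A downstream validation layer would then check the extracted dates for plausibility (ordering, date ranges) before they enter the clinical data pipeline.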
Posted 1 week ago
7.0 - 13.0 years
0 Lacs
hyderabad, telangana
On-site
You will be joining HCL Technologies, a renowned global technology company dedicated to helping enterprises transform their businesses for the digital era. With a strong foundation in innovation, a distinguished management philosophy, and a commitment to customer relationships, HCL is at the forefront of technological advancement. Embracing diversity, social responsibility, sustainability, and education, HCL operates across 52 countries with over 197,000 Ideapreneurs, serving leading enterprises worldwide.

As a Java Full Stack Developer, you will play a pivotal role in shaping the future through your expertise and passion. Leveraging your 7 to 13 years of experience, you will develop solutions using Java 8, Angular, Spring Boot, Spring Cloud, and Microservices. Your proficiency in J2EE technologies, ORM, Hibernate, SOAP services, REST services, and various other tools and principles will be crucial in delivering holistic services to our clients.

Key Qualifications:
- Bachelor's or Master's degree in Engineering or Computer Applications
- Proficiency in Java 8, Angular, Spring Boot, and Microservices
- Strong experience in OOP principles, Java design patterns, and multithreading
- Expertise in SQL and NoSQL databases like Oracle, Cassandra, or Couchbase
- Familiarity with Microservices architectural patterns and solution design
- Knowledge of tools such as Jenkins, Git, CI/CD, and AWS

At HCLTech, we prioritize your growth and development. You will have the opportunity to explore your potential, receive mentorship from senior leaders, and participate in learning and career development programs tailored to your needs. Our agile work environment, global presence, and commitment to employee well-being make us a preferred employer for those seeking a fulfilling and rewarding career.

Join us at HCL Technologies and be part of a dynamic, diverse, and innovative team that values your unique contributions and offers comprehensive benefits, growth opportunities, and a positive work environment. Your journey with us will be defined by continuous learning, collaboration, and the pursuit of excellence in a virtual-first work culture that fosters work-life integration and flexibility. Come be a part of our success story and unleash your full potential with us.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Senior QA Test Analyst located in Bangalore, India, you will be an integral part of our agile development team, collaborating with talented and dedicated colleagues. Your role requires keen attention to detail, strong analytical and diagnostic skills, and the motivation to excel in a dynamic, adaptable work environment. You will primarily work with tools such as Cucumber, NodeJS, TypeScript, Playwright, Jest, Git, and AWS S3.

Your responsibilities will include understanding feature-centric test design techniques and incorporating them into test plans and strategies. You should possess a deep technical understanding of integrated systems to effectively identify and address issues in the Application Under Test (AUT). A solid grasp of BDD (Behavior Driven Development) concepts is essential to derive precise features and scenarios from PRDs, epics, and user stories. Your coding skills in TypeScript should be top-notch, producing optimized, industry-standard code that follows best practices. You should also have a good understanding of test frameworks, Separation of Concerns (SoC), GitHub, code review processes, and branching, and experience with REST API testing using standard libraries is a must. As a senior engineer, you will be expected to mentor junior team members and collaborate effectively within a global team.

Key skills for this role include:
- 4-6 years of test automation experience in UI and API automation using JavaScript or TypeScript
- Proficiency in BDD, Node JS, and UI testing
- Exposure to test frameworks such as Jest, Mocha, Protractor, Jasmine, or Cucumber
- Familiarity with automation frameworks like Playwright, Puppeteer, WDIO, or Cypress

To be successful in this position, you should hold a Bachelor's degree in Computer Science, MIS, or a related field, along with 3-5 years of software testing experience for SaaS products. This is a hybrid role based at The Leela Office in Bangalore, with the expectation of working from the office on Tuesdays, Wednesdays, and Thursdays, and from home on Mondays and Fridays.

About Notified: Notified is dedicated to fostering a more connected world by empowering individuals and brands to amplify their stories. With a platform that enhances public relations, investor relations, and marketing efforts for over 10,000 global customers, we believe in the power of storytelling. As a leader in enterprise webcasting, investor relations content distribution, and press release distribution, we help clients worldwide monitor social media conversations and host numerous events annually.

At Notified, we prioritize maintaining a healthy work-life balance and providing opportunities for self-development and growth. Our employees have access to our internal learning and development university, DevelopU, which offers a wide range of courses and resources for career advancement. We offer a hybrid work schedule, comprehensive health insurance, and various location-specific social events and outings. Join us at Notified to be part of an international work environment with opportunities for innovation, creativity, and personal growth, and to work on best-in-class solutions with an award-winning team that is passionate about helping individuals and brands amplify their stories globally.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
We are looking for a skilled Veeam Backup Administrator to manage and maintain backup, replication, and disaster recovery solutions using Veeam Backup & Replication. The ideal candidate should have hands-on experience configuring backup solutions across on-premise and cloud environments, with a focus on automation, reporting, and BCP/DR planning.

Key Responsibilities:
- Manage and configure Veeam Backup & Replication infrastructure
- Schedule and monitor backup, backup copy, and replication jobs
- Set up backup and copy jobs from on-prem to AWS S3
- Configure and manage Veeam ONE for performance monitoring and reporting
- Automate and schedule reports for backup and replication job statuses
- Configure Veeam Enterprise Manager for centralized backup administration
- Set up tape backups within the Veeam environment
- Implement immutable repositories for enhanced data security
- Configure storage snapshots in DD Boost and Unity storage
- Design and execute BCP/DR strategies and perform server-level testing for recovery readiness

Required Skills:
- Hands-on experience with Veeam Backup & Replication
- Proficiency in Veeam ONE, Enterprise Manager, and tape backup configuration
- Experience with backup to cloud storage (AWS S3)
- Strong understanding of immutable backups and snapshot technology
- Knowledge of DD Boost, Unity storage, and storage replication
- Experience in BCP/DR planning and execution
- Good troubleshooting and documentation skills

Technical Key Skills: Veeam Backup, Replication, AWS S3, Veeam ONE, Enterprise Manager, Tape Backup, Immutable Backup, DD Boost, Unity Storage, BCP/DR

Location: Bangalore, India
Experience: 4+ years
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As an offshore Tech Lead with Databricks engineering experience, your primary responsibility will be to lead the team from offshore. You will develop and maintain a metadata-driven generic ETL framework for automating ETL code, which includes designing, building, and optimizing ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS, and ingesting data from various structured and unstructured sources such as APIs, RDBMS, flat files, and streaming.

You will be expected to develop and maintain robust data pipelines for both batch and streaming data using Delta Lake and Spark Structured Streaming, and to implement data quality checks, validations, and logging mechanisms. It will be crucial to optimize pipeline performance, cost, and reliability while collaborating with data analysts, BI, and business teams to deliver fit-for-purpose datasets. You will also support data modeling efforts, including star and snowflake schemas and a de-normalized tables approach, assist with data warehousing initiatives, and work with orchestration tools like Databricks Workflows to schedule and monitor pipelines. Following best practices for version control, CI/CD, and collaborative development is expected.

In terms of required skills, you should have hands-on experience in ETL/data engineering roles and strong expertise in Databricks (PySpark, SQL, Delta Lake), with the Databricks Data Engineer Certification preferred. Experience with Spark optimization, partitioning, caching, and handling large-scale datasets is crucial. Proficiency in SQL and scripting in Python or Scala is required, along with a solid understanding of data lakehouse/medallion architectures and modern data platforms. Experience working with cloud storage systems like AWS S3, familiarity with DevOps practices (Git, CI/CD, Terraform, etc.), and strong debugging, troubleshooting, and performance-tuning skills are also necessary.

In summary, as an offshore Tech Lead with Databricks experience, you will play a vital role in developing and maintaining ETL frameworks, optimizing data pipelines, collaborating with various teams, and ensuring data quality and reliability. Your expertise in Databricks, ETL processes, data modeling, and cloud platforms will be instrumental in driving the success of the projects you undertake.

About Virtusa: At Virtusa, we value teamwork, quality of life, and professional and personal development. Joining our team means becoming part of a global workforce of 27,000 individuals who are dedicated to your growth. We offer exciting projects, opportunities, and exposure to state-of-the-art technologies throughout your career with us. We believe in collaboration, a team-oriented environment, and providing a dynamic space for great minds to nurture new ideas and achieve excellence.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
The ideal candidate for this role should possess the following technical skills:
- Proficiency in Java/J2EE, Spring/Spring Boot/Quarkus frameworks, Microservices, Angular, Oracle, PostgreSQL, and MongoDB
- Experience with AWS services such as S3, Lambda, EC2, EKS, and CloudWatch
- Familiarity with event streaming using Kafka, plus Docker and Kubernetes
- Knowledge of GitHub and experience with CI/CD pipelines

In addition, it would be beneficial for the candidate to have the following technical skills:
- Hands-on experience with cloud platforms like AWS, Azure, or GCP
- Understanding of CI/CD pipelines and tools like Jenkins and GitLab CI/CD
- Familiarity with monitoring and logging tools such as Prometheus and Grafana

Overall, the successful candidate will have a strong technical background across these technologies and platforms, along with the ability to adapt to new tools and frameworks as needed.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be responsible for fetching and transforming data from various systems, conducting in-depth analyses to identify gaps, opportunities, and insights, and providing recommendations that support strategic business decisions. Your key responsibilities will include data extraction and transformation, data analysis and insight generation, visualization and reporting, collaboration with cross-functional teams, and building strong working relationships with external stakeholders. You will report to the VP of Business Growth and work closely with clients.

To excel in this role, you should have proficiency in SQL for data querying and Python for data manipulation and transformation. Experience with data engineering tools such as Spark and Kafka, as well as orchestration tools like Apache NiFi and Apache Airflow, will be essential for ETL processes and workflow automation. Expertise in data visualization tools such as Tableau and Power BI, along with strong analytical skills including statistical techniques, is crucial.

In addition to technical skills, you should possess soft skills such as flexibility, excellent communication, business acumen, and the ability to work independently as well as within a team. Your academic qualifications should include a Bachelor's or Master's degree in Applied Mathematics, Management Science, Data Science, Statistics, Econometrics, or Engineering. Extensive experience in Data Lake architecture, building data pipelines using AWS services, proficiency in Python and SQL, and experience in the banking domain will be advantageous. Overall, you should demonstrate high motivation, a good work ethic, maturity, personal initiative, and strong oral and written communication skills to succeed in this role.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Vola Finance is a rapidly expanding fintech company that is transforming the landscape of financial access and management. Our cutting-edge platform empowers individuals to enhance their financial well-being and take charge of their expenditures through a range of innovative tools and solutions. With the support of top-tier investors, we are dedicated to crafting products that have a significant positive impact on the lives of our users. Our founding team comprises enthusiastic leaders with extensive backgrounds in finance and technology; drawing on their experience at leading global corporations, they are committed to cultivating a culture of creativity, teamwork, and excellence within our organization.

As a member of our team, your primary responsibilities will include:
- Developing churn prediction models utilizing advanced machine learning algorithms based on user transactional and behavioral data (see the sketch after this list)
- Constructing regression models to predict users' income and balances using transaction data
- Creating customer segmentation and recommendation engines for cross-selling initiatives
- Building natural language processing models to gauge customer sentiment
- Developing propensity models and conducting lifetime value (LTV) analysis
- Establishing modern data pipelines and processing systems using AWS PaaS components like Glue and SageMaker Studio
- Utilizing API tools such as REST, Swagger, and Postman
- Deploying models in the AWS environment and managing the production setup
- Collaborating effectively with cross-functional teams to collect data and derive insights

Essential technical skill set:
1. Prior experience in fintech product and growth strategy
2. Proficiency in Python
3. Strong grasp of linear regression, logistic regression, and tree-based machine learning algorithms
4. Sound knowledge of statistical analysis and A/B testing
5. Familiarity with AWS services such as SageMaker, S3, EC2, and Docker
6. Experience with REST APIs, Swagger, and Postman
7. Proficiency in Excel
8. Competence in SQL
9. Ability to work with visualization tools like Redash or Grafana
10. Familiarity with versioning tools like Bitbucket, GitHub, etc.
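As a hedged baseline for the churn-prediction work listed above, using scikit-learn on tabular features; the file and column names are illustrative assumptions:

```python
# Hedged sketch: a baseline churn classifier on tabular user features.
# "user_features.csv" and its columns are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("user_features.csv")          # transactional/behavioral features
X = df.drop(columns=["user_id", "churned"])
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# AUC on held-out users is a sensible first metric for imbalanced churn labels.
probs = model.predict_proba(X_test)[:, 1]
print(f"holdout AUC: {roc_auc_score(y_test, probs):.3f}")
```

A tree-based baseline like this pairs naturally with the posting's emphasis on tree-based algorithms, and can later be swapped for a SageMaker-hosted model for production deployment.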
Posted 1 week ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply with Sanjeevan Natarajan - 94866 21923, sanjeevan.natarajan@careernet.in

Role & responsibilities:
- Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
- End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle; define standards for metadata, cataloging, and governance.
- Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
- Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
- Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile:
- Python, SQL, PySpark, Databricks, AWS (mandatory)
- Leadership experience in data engineering/architecture
- Added advantage: experience in Life Sciences / Pharma
Posted 1 week ago