Hyderabad
INR 20.0 - 30.0 Lacs P.A.
Remote
Full Time
Note: Looking for immediate joiners; timings 5:30 pm - 1:30 am IST (Remote).

Project Overview: This is one of the workstreams of Project Acuity. The Client Data Platform includes a centralized web application for internal platform users across the Recruitment Business, supporting marketing and operational use cases. Building a patient-level database will significantly strengthen the Client's future reporting capabilities and engagement with external stakeholders.

Role Scope / Deliverables: We are looking for an experienced AWS Data Engineer to join our dynamic team, responsible for developing, managing, and optimizing data architectures. The ideal candidate will have extensive experience in integrating large-scale datasets and building scalable, automated data pipelines, along with experience using AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively.

Must Have Skills:
- Proficiency in programming languages such as Python, Scala, or similar.
- Strong experience in data classification, including the identification of PII data entities.
- Ability to leverage AWS services (e.g., SageMaker, Comprehend, Entity Resolution) to solve complex data-related challenges.
- Strong analytical and problem-solving skills, with the ability to innovate and develop new approaches to data engineering.
- Experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively.
- Experience with core AWS services, including IAM, VPC, EC2, S3, RDS, Lambda, CloudWatch, and CloudTrail.

Nice to Have Skills:
- Experience with data privacy and compliance requirements, especially related to PII data.
- Familiarity with advanced data indexing techniques, vector databases, and other technologies that improve the quality of outputs.
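As a minimal sketch of the PII identification work this role describes, the snippet below calls Amazon Comprehend through boto3 to flag PII entities in free text. It assumes AWS credentials and region are already configured; the sample text and region are hypothetical, and the actual project may use a different approach.

import boto3

# Assumes AWS credentials and region are configured in the environment.
comprehend = boto3.client("comprehend", region_name="us-east-1")

def find_pii_entities(text: str) -> list[dict]:
    """Return PII entities (type, score, matched value) detected in free text."""
    response = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
    return [
        {
            "type": e["Type"],
            "score": round(e["Score"], 3),
            "value": text[e["BeginOffset"]:e["EndOffset"]],
        }
        for e in response["Entities"]
    ]

if __name__ == "__main__":
    # Hypothetical sample record, not real patient data.
    sample = "Patient John Doe, DOB 1981-04-02, can be reached at john.doe@example.com."
    for entity in find_pii_entities(sample):
        print(entity)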
Hyderabad
INR 20.0 - 30.0 Lacs P.A.
Remote
Full Time
Looking for full-time, consulting, or freelance experts with Alation experience.
Note: Looking for immediate joiners only.

Summarized Purpose: We are seeking an exceptionally skilled and motivated Solution Architect / Data Governance Lead to play a pivotal role in data analytics and business intelligence, driving impactful solutions that help organizations harness the full potential of their data, make informed decisions in support of business objectives, and foster a data-driven culture.

Essential Functions:
- Design and architect end-to-end data governance solutions, focusing on the implementation of the Alation tool, to meet the organization's data governance objectives and requirements.
- Collaborate with business stakeholders, data stewards, and IT teams to understand data governance needs and translate them into technical requirements, leveraging the capabilities of the Alation tool.
- Develop data governance strategies, policies, and standards that align with industry best practices and regulatory requirements, leveraging the Alation tool's features and functionalities.
- Implement and configure the Alation tool to support data governance initiatives, including data cataloging, data lineage, data quality monitoring, and metadata management.
- Define and implement data governance workflows and processes within the Alation tool, ensuring efficient data governance operations across the organization.
- Collaborate with cross-functional teams to integrate the Alation tool with existing data systems and infrastructure, ensuring seamless data governance processes.
- Conduct data profiling, data quality assessments, and data lineage analysis using the Alation tool to identify data issues and develop remediation strategies.
- Provide guidance and support to business stakeholders, data stewards, and data owners on the effective use of the Alation tool for data governance activities.
- Stay updated on the latest trends, emerging technologies, and best practices in data governance and the Alation tool, and proactively recommend enhancements and improvements.
- Collaborate with IT teams to ensure the successful implementation, maintenance, and scalability of the Alation tool, including upgrades, patches, and configurations.

Knowledge, Skills and Abilities:
- Bachelor's degree in computer science, information systems, or a related field; a master's degree is preferred.
- Minimum 15 years of experience in IT, including at least 3 years with Alation and 10+ years in SQL, EDW, or other data engineering / data science capabilities.
- Proven experience as a Data Governance Solution Architect, Data Architect, or similar role, with hands-on experience implementing data governance solutions using the Alation tool. (Must Have)
- Strong expertise in designing and implementing end-to-end data governance frameworks and solutions, leveraging the Alation tool. (Must Have)
- In-depth knowledge of data governance principles, data management best practices, and regulatory requirements (e.g., GDPR, CCPA).
- Proficiency in configuring and customizing the Alation tool, including data cataloging, data lineage, data quality, and metadata management features. (Must Have)
- Experience in data profiling, data quality assessment, and data lineage analysis using the Alation tool or similar data governance platforms. (Must Have)
- Familiarity with data integration, data modeling, and data architecture concepts.
- Excellent analytical, problem-solving, and decision-making skills with keen attention to detail.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and influence stakeholders.
- Proven ability to manage multiple projects simultaneously, prioritize tasks, and meet deadlines.
- Professional certifications in data governance, such as CDMP (Certified Data Management Professional from DAMA) or similar, are a plus. (Nice to Have)
- Good understanding of Master Data Management, data integration, and SQL.
- Exposure to dimensional data modeling, Data Vault modeling, ETL, ELT, and data warehousing methodologies.
- Ability to communicate effectively, in writing and orally, with a wide range of audiences and to maintain interpersonal relationships.
- Ability to work within time constraints and manage multiple tasks against critical deadlines.
- Ability to perform problem solving and apply critical thinking, deductive reasoning, and inductive reasoning to identify solutions.
- Certifications from Alation are desired. (Must Have)
Hyderabad
INR 10.0 - 12.0 Lacs P.A.
Remote
Full Time
Role: BI Analytics Specialist
Location: Remote (should work EST hours; night shift in India)
Work time: 6:00 PM to 3:30 AM IST
Duration: 6 months with possible extension

Key Responsibilities:
ThoughtSpot Architecture and Implementation:
- Design and implement ThoughtSpot solutions that meet business requirements.
- Configure and optimize ThoughtSpot environments for performance and scalability.
- Develop and maintain data models within ThoughtSpot.
Business Intelligence and Analytics:
- Gather and analyze business requirements to create BI solutions.
- Design and develop interactive dashboards, reports, and data visualizations.
- Conduct data analysis to support business decision-making processes.
Data Integration and Management:
- Integrate ThoughtSpot with various data sources, including Databricks, Starburst, and AWS.
- Ensure data quality, integrity, and security within the BI solutions.
- Collaborate with data engineering teams to establish ETL processes.
AWS:
- Leverage AWS services (e.g., S3, Redshift, Glue, Lambda) for data storage, processing, and analytics.
- Design and implement scalable data architectures on AWS.
- Ensure security and compliance of data solutions on AWS.
Collaboration and Communication:
- Work closely with business stakeholders, data engineers, and data scientists to understand requirements and deliver solutions.
- Provide training and support to end users on ThoughtSpot and BI tools.
- Stay updated with the latest industry trends and best practices in BI, ThoughtSpot, Starburst, Databricks, and AWS.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field; Master's degree preferred.
- 5+ years of experience in business intelligence and data analytics.
- Proven experience with ThoughtSpot, including architecture, implementation, and administration.
- Strong proficiency in Starburst and Databricks for data processing and analytics.
- Extensive experience with AWS services related to data analytics (S3, Redshift, Glue, Lambda, etc.).
- Excellent SQL skills and experience with ETL tools and processes.
- Strong analytical, problem-solving, and communication skills.
- Ability to work independently and as part of a team in a fast-paced environment.
- Knowledge of data governance and security best practices.

What You Are Expected to Do:
- ThoughtSpot environment maintenance: clean-up, coordination with ThoughtSpot and Starburst teams, tracking system health.
- Creation of runbooks, SOPs, and migration documents.
- Maintaining the connections and other admin objects (tables, views, etc.).
- Assisting in environment migrations and peer review.
- AD group sync-up Python script enhancements: script split-up, nested AD group sync-up, and other improvements.

Preferred Skills:
- Experience with other BI tools (e.g., Tableau, Power BI).
- Knowledge of Python for data analysis.
Bengaluru
INR 22.5 - 30.0 Lacs P.A.
Hybrid
Full Time
Role: Project Manager (Tech Java + Azure PM)
Location: Bangalore
Duration: Full-time
Mode: Hybrid
Experience: 10+ years

Job Description: Looking for an experienced Java Technical Manager who has led a team of 25-30 to deliver high-quality software products in a fast-paced environment.

Required Experience (basic expectations for this role):
- Good knowledge of the overall US retirement industry and Defined Contribution plan administration is a must.
- Strong practical experience using Scrum, Agile modelling, and adaptive software development.
- Experience with collaboration, project tracking, and management tools such as Jira.

Technical Skills:
- Understanding of Core Java, microservices, REST, test-driven design, Oracle/PostgreSQL, SQL, PL/SQL, Git, CI/CD pipelines, JUnit, Sonar, Jenkins, Angular, and infrastructure.
- MS Azure knowledge and experience with services provided by the platform, such as Azure App Service, Azure SQL Database, Azure Storage, Azure Active Directory, Azure Security Center, encryption, Azure DevOps, etc.
- Should have experience in overseeing and reviewing the design and architecture of applications in a hybrid cloud environment.
- PMP certification preferred.

Responsibilities:
- Oversees systems design and implementation of complex design components.
- Creates project plans and deliverables and monitors task deadlines.
- Oversees progress of the project and highlights risks at the appropriate time.
- Guides and mentors the team; evaluates team performance.
- Excellent communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.
- Fosters a culture of continuous improvement and innovation within the team.
Hyderabad, Chennai, Bengaluru
INR 5.0 - 12.0 Lacs P.A.
Work from Office
Full Time
Job Summary: We are seeking a skilled Tosca Automation Test Engineer to join our QA team. The successful candidate will be responsible for developing, maintaining, and executing automated test scripts using the Tricentis Tosca testing tool. This role requires a strong understanding of test automation strategies, excellent scripting skills, and the ability to collaborate with cross-functional teams to ensure software quality and performance.

Key Responsibilities:
- Design, develop, and maintain automated test cases using Tricentis Tosca.
- Analyze business requirements and translate them into effective test cases.
- Execute test cases, log defects, and track resolution.
- Develop test data and perform test execution for various test phases (functional, regression, integration, and end-to-end).
- Collaborate with developers, business analysts, and other stakeholders to ensure high-quality deliverables.
- Maintain and enhance existing test frameworks and strategies.
- Participate in Agile/Scrum ceremonies and contribute to sprint planning and reviews.
- Provide reports on test execution and defect status.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in QA automation using Tosca Testsuite.
- Strong knowledge of Tosca modules, test case design, test step libraries, and test data management.
- Experience integrating Tosca with CI/CD pipelines (e.g., Jenkins, Azure DevOps).
- Solid understanding of the software testing life cycle (STLC) and QA best practices.
- Familiarity with API testing, service virtualization, and test management tools.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Hyderabad
INR 12.0 - 18.0 Lacs P.A.
Remote
Full Time
Role: Senior Data Engineer - Azure/Snowflake
Duration: 6+ months
Location: Remote
Working Hours: 12:30 pm - 9:30 pm IST (3 am - 12 pm EST)

Job Summary: We are seeking a Senior Data Engineer with advanced hands-on experience in Snowflake and Azure to support the development and optimization of enterprise-grade data pipelines. This role is ideal for someone who enjoys deep technical work and solving complex data engineering challenges in a modern cloud environment.

Key Responsibilities:
- Build and enhance scalable data pipelines using Azure Data Factory, Snowflake, and Azure Data Lake.
- Develop and maintain ELT processes to ingest and transform data from various structured and semi-structured sources.
- Write optimized and reusable SQL for complex data transformations in Snowflake.
- Collaborate closely with analytics teams to ensure clean, reliable data delivery.
- Monitor and troubleshoot pipeline performance, data quality, and reliability.
- Participate in code reviews and contribute to best practices around data engineering standards and governance.

Qualifications:
- 5+ years of data engineering experience in enterprise environments.
- Deep hands-on experience with Snowflake, Azure Data Factory, Azure Blob/Data Lake, and SQL.
- Proficient in scripting for data workflows (Python or similar).
- Strong grasp of data warehousing concepts and ELT development best practices.
- Experience with version control tools (e.g., Git) and CI/CD processes for data pipelines.
- Detail-oriented with strong problem-solving skills and the ability to work independently.
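As a minimal sketch of the kind of Snowflake ELT transformation this role mentions, the snippet below uses the Snowflake Python connector to merge newly staged rows into a target table. Connection parameters, schema, and table names are hypothetical placeholders, not the actual project's objects.

import os
import snowflake.connector

# Hypothetical incremental upsert from a staging table into a dimension table.
MERGE_SQL = """
MERGE INTO analytics.dim_customer AS tgt
USING staging.customer_updates AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
    VALUES (src.customer_id, src.email, src.updated_at)
"""

with snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",   # placeholder warehouse name
    database="ANALYTICS_DB",    # placeholder database name
) as conn:
    with conn.cursor() as cur:
        cur.execute(MERGE_SQL)
        print(f"Rows affected: {cur.rowcount}")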
Hyderabad
INR 9.0 - 11.0 Lacs P.A.
Remote
Full Time
Role: Data Engineer (Azure, Snowflake) - Mid-Level
Duration: 6+ months
Location: Remote
Working Hours: 12:30 pm - 9:30 pm IST (3 am - 12 pm EST)

Job Summary: We are looking for a Data Engineer with solid hands-on experience in Azure-based data pipelines and Snowflake to help build and scale data ingestion, transformation, and integration processes in a cloud-native environment.

Key Responsibilities:
- Develop and maintain data pipelines using ADF, Snowflake, and Azure Storage.
- Perform data integration from various sources, including APIs, flat files, and databases.
- Write clean, optimized SQL and support data modeling efforts in Snowflake.
- Monitor and troubleshoot pipeline issues and data quality concerns.
- Contribute to documentation and promote best practices across the team.

Qualifications:
- 3-5 years of experience in data engineering or a related role.
- Strong hands-on knowledge of Snowflake, Azure Data Factory, SQL, and Azure Data Lake.
- Proficient in scripting (Python preferred) for data manipulation and automation.
- Understanding of data warehousing concepts and ETL/ELT patterns.
- Experience with Git, JIRA, and agile delivery environments is a plus.
- Strong attention to detail and eagerness to learn in a collaborative team setting.
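As a minimal sketch of one ingestion pattern this role mentions (flat files into Snowflake), the snippet below uploads a local CSV to a table's internal stage with PUT and loads it with COPY INTO. The file path, warehouse, database, and table names are hypothetical placeholders.

import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",   # placeholder warehouse name
    database="RAW_DB",     # placeholder database name
    schema="LANDING",      # placeholder schema name
)
try:
    cur = conn.cursor()
    # Upload the local flat file to the ORDERS table's internal stage.
    cur.execute("PUT file:///tmp/orders_2024.csv @%ORDERS AUTO_COMPRESS=TRUE")
    # Load the staged file into the table, skipping the header row.
    cur.execute(
        "COPY INTO ORDERS FROM @%ORDERS "
        "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1) PURGE = TRUE"
    )
finally:
    conn.close()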
Hyderabad
INR 9.0 - 11.0 Lacs P.A.
Remote
Full Time
Role: Data Engineer (ETL Processes, SSIS, AWS)
Duration: Full-time
Location: Remote
Working hours: 4:30 am to 10:30 am IST

Note: We need an ETL engineer for MS SQL Server Integration Services (SSIS) working the 4:30 am to 10:30 am IST shift.

Roles & Responsibilities:
- Design, develop, and maintain ETL processes using SQL Server Integration Services (SSIS).
- Create and optimize complex SQL queries, stored procedures, and data transformation logic on Oracle and SQL Server databases.
- Build scalable and reliable data pipelines using AWS services (e.g., S3, Glue, Lambda, RDS, Redshift).
- Develop and maintain Linux shell scripts to automate data workflows and perform system-level tasks.
- Schedule, monitor, and troubleshoot batch jobs using tools like Control-M, AutoSys, or cron.
- Collaborate with stakeholders to understand data requirements and deliver high-quality integration solutions.
- Ensure data quality, consistency, and security across systems.
- Maintain detailed documentation of ETL processes, job flows, and technical specifications.
- Experience with job scheduling tools such as Control-M and/or AutoSys.
- Exposure to version control tools (e.g., Git) and CI/CD pipelines.
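As a minimal sketch of the AWS side of such a pipeline, the snippet below stages an extract file to S3 and starts a Glue job with boto3. The bucket, key, file path, and job name are hypothetical placeholders; credentials are assumed to come from the environment, and the real workflow would likely be driven by a scheduler such as cron or Control-M.

import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

LOCAL_FILE = "/data/exports/customers_20240101.csv"   # placeholder extract path
BUCKET = "etl-landing-zone"                            # placeholder bucket
KEY = "customers/customers_20240101.csv"               # placeholder object key

# Upload the extract produced by the upstream SSIS package.
s3.upload_file(LOCAL_FILE, BUCKET, KEY)

# Start the Glue job that transforms and loads the staged file.
run = glue.start_job_run(
    JobName="load-customers-to-redshift",              # placeholder job name
    Arguments={"--input_path": f"s3://{BUCKET}/{KEY}"},
)
print("Started Glue job run:", run["JobRunId"])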
Hyderabad
INR 18.0 - 22.5 Lacs P.A.
Remote
Full Time
Role: SQL Data Engineer - ETL, DBT & Snowflake Specialist
Location: Remote
Duration: 14+ months
Timings: 5:30 pm IST to 1:30 am IST
Note: Immediate joiners only

Required Experience:
Advanced SQL Proficiency
- Writing and optimizing complex queries, stored procedures, functions, and views.
- Experience with query performance tuning and database optimization.
ETL/ELT Development
- Building and maintaining ETL/ELT pipelines.
- Familiarity with ETL tools or processes and orchestration frameworks.
Data Modeling
- Designing and implementing data models.
- Understanding of dimensional modeling and normalization.
Snowflake Expertise
- Hands-on experience with Snowflake's architecture and features.
- Experience with Snowflake databases, schemas, procedures, and functions.
DBT (Data Build Tool)
- Building data models and transformations using DBT.
- Implementing DBT best practices, including testing, documentation, and CI/CD integration.
Programming and Automation
- Proficiency in Python is a plus.
- Experience with version control systems (e.g., Git, Azure DevOps).
- Experience with Agile methodologies and DevOps practices.
Collaboration and Communication
- Working effectively with data analysts and business stakeholders.
- Translating technical concepts into clear, actionable insights.
- Prior experience in a fast-paced, data-driven environment.
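As a minimal sketch of the DBT CI/CD integration mentioned above, the Python snippet below wraps the dbt CLI so a pipeline step can build a set of models and then run their tests. The selector name "staging" is a hypothetical placeholder for a project's model path; real projects would parameterize targets and profiles.

import subprocess

def run_dbt_step(args: list[str]) -> None:
    """Run a dbt CLI command and fail the pipeline step if it fails."""
    print(f"Running: dbt {' '.join(args)}")
    subprocess.run(["dbt", *args], check=True)  # raises CalledProcessError on failure

if __name__ == "__main__":
    run_dbt_step(["run", "--select", "staging"])   # build the selected models
    run_dbt_step(["test", "--select", "staging"])  # then run their schema/data tests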
Hyderabad
INR 22.5 - 27.5 Lacs P.A.
Remote
Full Time
Role: Data Architect / Data Modeler - ETL, Snowflake, DBT
Location: Remote
Duration: 14+ months
Timings: 5:30 pm IST to 1:30 am IST
Note: Looking for immediate joiners

Job Summary: We are seeking a seasoned Data Architect / Modeler with deep expertise in Snowflake, DBT, and modern data architectures, including Data Lake, Lakehouse, and Databricks platforms. The ideal candidate will be responsible for designing scalable, performant, and reliable data models and architectures that support analytics, reporting, and machine learning needs across the organization.

Key Responsibilities:
- Architect and design data solutions using Snowflake, Databricks, and cloud-native lakehouse principles.
- Lead the implementation of data modeling best practices (star/snowflake schemas, dimensional models) using DBT.
- Build and maintain robust ETL/ELT pipelines supporting both batch and real-time data processing.
- Develop data governance and metadata management strategies to ensure high data quality and compliance.
- Define data architecture frameworks, standards, and principles for enterprise-wide adoption.
- Work closely with business stakeholders, data engineers, analysts, and platform teams to translate business needs into scalable data solutions.
- Provide guidance on data lake and data warehouse integration, helping bridge structured and unstructured data needs.
- Establish data lineage and documentation, and maintain architecture diagrams and data dictionaries.
- Stay up to date with industry trends and emerging technologies in cloud data platforms and recommend improvements.

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or data modeling roles.
- Strong experience with Snowflake, including performance tuning, security, and architecture.
- Hands-on experience with DBT (Data Build Tool) for building and maintaining data transformation workflows.
- Deep understanding of Lakehouse architecture, Data Lake implementations, and Databricks.
- Solid grasp of dimensional modeling, normalization/denormalization strategies, and data warehouse design principles.
- Experience with cloud platforms (e.g., AWS, Azure, or GCP).
- Proficiency in SQL and scripting languages (e.g., Python).
- Familiarity with data governance frameworks, data catalogs, and metadata management tools.
Hyderabad
INR 15.0 - 20.0 Lacs P.A.
Work from Office
Full Time
Role: Technical Project Manager
Location: Gachibowli, Hyderabad
Duration: Full time
Timings: 5:30 pm - 2:00 am IST
Note: Looking for immediate joiners only (15-30 days' notice)

Job Summary: We are seeking a Technical Project Manager with a strong data engineering background to lead and manage end-to-end delivery of data platform initiatives. The ideal candidate will have hands-on exposure to AWS, ETL pipelines, Snowflake, and DBT, and must be adept at stakeholder communication, agile methodologies, and cross-functional coordination across engineering, data, and business teams.

Key Responsibilities:
- Plan, execute, and deliver data engineering and cloud-based projects within scope, budget, and timeline.
- Work closely with data architects, engineers, and analysts to manage deliverables involving ETL pipelines, the Snowflake data warehouse, and DBT models.
- Lead Agile/Scrum ceremonies: sprint planning, backlog grooming, stand-ups, and retrospectives.
- Monitor and report project status, risks, and issues to stakeholders and leadership.
- Coordinate cross-functional teams across data, cloud infrastructure, and product teams.
- Ensure adherence to data governance, security, and compliance standards throughout the lifecycle.
- Manage third-party vendors or consultants as required for data platform implementations.
- Own project documentation, including project charters, timelines, RACI matrix, risk registers, and post-implementation reviews.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field (Master's preferred).
- 8+ years in IT, with 3-5 years as a Project Manager in data-focused environments.
- Hands-on understanding of: AWS services (e.g., S3, Glue, Lambda, Redshift); ETL/ELT frameworks and orchestration; Snowflake data warehouse; DBT (Data Build Tool) for data modeling.
- Familiar with SQL, data pipelines, and data quality frameworks.
- Experience using project management tools like JIRA, Confluence, MS Project, and Smartsheet.
- PMP, CSM, or SAFe certifications preferred.
- Excellent communication, presentation, and stakeholder management skills.
Hyderabad, Chennai, Bengaluru
INR 12.0 - 18.0 Lacs P.A.
Work from Office
Full Time
Role: Healthcare Integration Developer (IRIS & HL7/FHIR)
Location: Hyderabad/Bangalore/Chennai
Work Timings: 1:00 pm - 10:00 pm IST
Experience: 6 - 8+ years

Required Skills:
- Must have strong software development experience in Java, Python, or any scripting language (ObjectScript).
- Must have proficiency in CCDA, HL7, FHIR, JSON, and XML messaging.
- Must have worked on clinical domain platforms and have a good understanding of clinical workflows.
- Must have proficiency in XSLT transformation and XPath mappings.
- Must have proficiency in development tools like Git and VS Code.
- Must have experience with integration protocols such as TCP/IP (MLLP), SFTP, REST, and SOAP.
- Proactive and highly organized, with strong time management and planning skills.
- Good to have: strong knowledge of SQL.
- Nice to have: experience with InterSystems IRIS for Health and ObjectScript.
- Nice to have: GCP/GKE, Docker containers, and DevOps exposure.
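As a minimal sketch of the HL7 v2 messaging work this role involves, the snippet below splits a message into segments and reads a couple of fields using plain Python string handling. The sample ADT message is hypothetical; production feeds would typically arrive over MLLP or SFTP and be handled by an integration engine such as IRIS.

# Hypothetical ADT^A01 sample; "\r" is the HL7 segment terminator.
SAMPLE_HL7 = (
    "MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|"
    "202501011230||ADT^A01|MSG00001|P|2.5\r"
    "PID|1||123456^^^HOSP^MR||DOE^JOHN||19810402|M\r"
)

def parse_segments(message: str) -> dict[str, list[str]]:
    """Split an HL7 v2 message into segments keyed by segment ID (MSH, PID, ...)."""
    segments = {}
    for raw in filter(None, message.split("\r")):
        fields = raw.split("|")
        segments[fields[0]] = fields
    return segments

if __name__ == "__main__":
    seg = parse_segments(SAMPLE_HL7)
    print("Message type:", seg["MSH"][8])   # MSH-9, e.g. ADT^A01
    print("Patient name:", seg["PID"][5])   # PID-5, e.g. DOE^JOHN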
Hyderabad
INR 5.0 - 12.0 Lacs P.A.
Work from Office
Full Time
Job Title: AI/ML Engineer - GenAI & MLOps
Experience: 2 to 5 years
Location: Hyderabad (Work From Office)
Employment Type: Full-Time

About the Role: We are looking for a passionate and skilled AI/ML Engineer with experience in Generative AI, Machine Learning Operations (MLOps), and core AI/ML development. You will play a key role in designing, developing, and deploying intelligent systems and scalable ML pipelines in a production environment.

Key Responsibilities:
- Design and implement machine learning models, especially in NLP, computer vision, and generative AI use cases (LLMs, diffusion models, etc.).
- Fine-tune and deploy transformer-based models (e.g., BERT, GPT, LLaMA) using open-source and commercial frameworks.
- Build and automate ML pipelines using MLOps tools such as MLflow, Kubeflow, or SageMaker.
- Work with cross-functional teams to deploy models to production with CI/CD and monitoring.
- Manage datasets, labeling strategies, and data versioning using tools like DVC or Weights & Biases.
- Conduct experiments, model evaluation, and performance tuning.
- Collaborate with backend engineers to integrate AI models into applications or APIs.

Required Skills & Qualifications:
- 2-5 years of hands-on experience in AI/ML model development and deployment.
- Strong knowledge of Python and ML libraries like TensorFlow, PyTorch, and scikit-learn.
- Experience with GenAI frameworks (e.g., Hugging Face Transformers, LangChain, OpenAI API).
- Exposure to MLOps practices and tools: Docker, MLflow, Airflow, FastAPI, Kubernetes, etc.
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Understanding of LLM fine-tuning, embeddings, vector stores, and prompt engineering.
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.

Good to Have:
- Knowledge of RAG (Retrieval-Augmented Generation).
- Experience with secure model deployment (RBAC, endpoint auth).
- Contributions to open-source AI/ML projects.
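As a minimal sketch of working with transformer-based models in the Hugging Face ecosystem mentioned above, the snippet below loads a small text-generation model through the pipeline API. The model name "gpt2" and the prompt are illustrative assumptions; fine-tuning and production serving would go well beyond this.

from transformers import pipeline

# Assumes the transformers library is installed and the model can be downloaded/cached.
generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can help clinical recruitment teams by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])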
Hyderabad
INR 5.0 - 11.0 Lacs P.A.
Work from Office
Full Time
Role: Business Analyst (Healthcare)
Location: Hyderabad/Hybrid
Duration: Full Time

A business analyst in the healthcare industry performs the following tasks:
- Conducts elicitation meetings to understand business requirements for the proposed system.
- Facilitates interviews as required to understand current procedures and identify potential areas for change.
- Communicates with internal and external stakeholders.
- Evaluates and designs healthcare processes for improvement or implementation.
- Documents detailed requirements specifications for the technical team to develop the application.
- Collaborates with stakeholders to analyze, evaluate, and implement changes in the system for better results.
- Conducts solution evaluation to assess value realization and designs improvements if value is not realized.
- Creates alternate strategies and plans for potential adoption, such as selecting a new EHR vendor or recommending technology to aid system interoperability.
Hyderabad
INR 8.0 - 18.0 Lacs P.A.
Work from Office
Full Time
Role: GCP Data Engineer
Location: Hyderabad
Duration: Full time

Roles & Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines using Apache Airflow to orchestrate complex workflows.
- Utilize Google BigQuery for large-scale data warehousing, analysis, and querying of structured and semi-structured data.
- Leverage the Google Cloud Platform (GCP) ecosystem, including services like Cloud Storage, Compute Engine, AI Platform, and Dataflow, to build and deploy data science solutions.
- Develop, train, and deploy machine learning models to solve business problems such as forecasting, customer segmentation, and recommendation systems.
- Write clean, efficient, and well-documented code in Python for data analysis, modeling, and automation.
- Use Docker to containerize applications and create reproducible research environments, ensuring consistency across development, testing, and production.
- Perform exploratory data analysis to identify trends, patterns, and anomalies, and effectively communicate findings to both technical and non-technical audiences.
- Collaborate with data engineers to ensure data quality and integrity.
- Stay current with the latest advancements in data science, machine learning, and big data technologies.
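As a minimal sketch of the Airflow-on-GCP orchestration this role describes, the DAG below runs a daily BigQuery aggregation job. It assumes Apache Airflow with the Google provider package is installed; the DAG ID, dataset, and table names are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_aggregation",   # placeholder DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    aggregate_orders = BigQueryInsertJobOperator(
        task_id="aggregate_orders",
        configuration={
            "query": {
                # Placeholder datasets/tables; real names come from the project.
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_order_totals AS
                    SELECT order_date, SUM(amount) AS total_amount
                    FROM raw.orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )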