9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Why Ryan?
- Global Award-Winning Culture
- Flexible Work Environment
- Generous Paid Time Off
- World-Class Benefits and Compensation
- Rapid Growth Opportunities
- Company-Sponsored Two-Way Transportation
- Exponential Career Growth

You will be a critical contributor to an ambitious strategic initiative aimed at re-envisioning a broad suite of enterprise-level applications. You will ensure the quality, reliability, and performance of software delivered by the development team through a balanced blend of automated and manual testing. Beyond validating functionality, you will champion company standards, continuous improvement, and a culture of quality across the SDLC. You will collaborate with cross-functional teams to ensure that software solutions align with user needs, business goals, and performance expectations. This role demands hands-on engineering capability, a high degree of ownership, and the ability to work independently as well as within a team.

Key Responsibilities

Test Strategy & Planning
- Develop, execute, and maintain comprehensive test plans covering functional, integration, regression, and performance scenarios.
- Translate business and technical requirements into meaningful manual and automated test cases.
- Provide actionable feedback to Business Analysts to ensure requirements are clear, testable, and traceable.
- Define and evolve the overall automation strategy aligned with enterprise standards and release cadence.
- Estimate, prioritize, and plan testing activities in collaboration with Product and Engineering leads.

Automation Framework Development
- Design, implement, and maintain scalable automation frameworks.
- Integrate tests into CI/CD pipelines and ensure fast feedback.
- Work closely with DevOps to optimize pipeline execution, test parallelization, and environment provisioning.

Quality Metrics & Reporting
- Define KPIs (defect leakage, test pass rate, code coverage) and generate dashboards that guide release-readiness decisions.
- Drive root-cause analysis sessions for escaped defects and champion preventive actions.
- Maintain meticulous documentation for all testing activities to ensure auditability and knowledge retention.

Collaboration & Execution
- Partner with application developers to troubleshoot issues and document defects.
- Work closely with product managers, UX designers, and business stakeholders.
- Contribute to project planning and estimation for features and enhancements.
- Lead technical discussions, mentor junior engineers, and support knowledge sharing.
- Own and improve automation frameworks (UI, API, and DB) to increase coverage and accelerate release cycles.

Required Competencies
- Technical Strength – Deep knowledge of test automation patterns, frameworks, and performance engineering; ability to debug complex issues across application layers.
- Solution Ownership – End-to-end accountability for test coverage, environment stability, and release certification.
- Collaboration & Influence – Clear, concise communication; ability to advocate for quality in technical forums and executive updates.
- Execution Excellence – Consistently delivers reliable, maintainable test suites under tight timelines.
- Continuous Improvement – Proactively introduces tooling, process enhancements, and emerging best practices.

What You Bring
- 6–9 years of experience in software quality engineering with a focus on automation for web and API-based applications.
- Proficiency in C#/.NET Core, test frameworks (Selenium/Playwright/SpecFlow), and API testing tools (Postman/JMeter).
- Hands-on experience with CI/CD and Azure/AWS cloud services.
- Strong SQL skills and experience validating data across relational and NoSQL stores.
- Proven ability to design, implement, and scale automation frameworks in a collaborative, Agile environment.
- Excellent written and verbal communication skills; able to articulate complex quality risks and solutions.
- A proactive mindset with the ability to work independently and manage competing priorities.

Why Join Us?
You’ll be part of a company where innovation meets real-world impact. We’re building something meaningful, and we want your expertise to help shape the future of our platform. Expect a collaborative environment, intelligent peers, and the opportunity to make technical and business decisions that matter.
Posted 1 day ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Role summary:
Be an integral part of our enterprise-scale migration from Bitbucket to GitHub Enterprise Cloud (GHEC), design and roll out GitHub Actions-based CI/CD, and establish secure, compliant, and observable build/release pipelines for a 300-developer organization in the healthcare domain. You will be the technical owner for source control strategy, build infrastructure, and release automation, with an emphasis on reliability, speed, and HIPAA/SOC 2 compliance.

What you’ll do:

Plan & execute the migration
- Inventory repos, pipelines, users, secrets, and integrations; define cutover strategy and rollback plans.
- Migrate code, issues, and CI from Bitbucket to GHEC with minimal downtime; script repeatable migration runbooks.
- Normalize repository standards (branch naming, default branches, protection rules, CODEOWNERS, templates).

Design CI/CD on GitHub Actions
- Architect multistage pipelines (build -> test -> security scans -> artifact publish -> deploy).
- Implement reusable workflows, composite actions, and organization-level workflow templates.
- Set up self-hosted runners and autoscaling runner fleets (containerized/ephemeral) for Linux/Windows/macOS as needed.
- Establish secret management via OIDC to cloud providers; remove long-lived credentials.

Security & compliance for healthcare
- Enable GitHub Advanced Security (code scanning, Dependabot, secret scanning).
- Enforce SSO/SAML, branch protection, required checks, signed commits, and PR review policies.
- Implement policy-as-code (e.g., Open Policy Agent, repo rulesets), change-management controls, and audit-ready logs.
- Ensure pipelines and artifacts align with HIPAA, SOC 2, GDPR, and least-privilege principles; avoid PHI in logs.

Build & release engineering
- Standardize build images, caching, and artifact storage; speed up CI with dependency caches and test parallelization.
- Create environment promotion flows (dev/stage/prod) with approvals and progressive delivery (canary/blue-green).
- Integrate QA automation, performance tests, and SAST/DAST into pipelines.

Observability & reliability
- Define and track DORA metrics (lead time, deployment frequency, MTTR, change failure rate).
- Add telemetry for pipeline duration, queue times, and flake rates; publish dashboards and SLAs for CI.

Change management & enablement
- Drive communications, training, and documentation; run office hours and migration pilots.
- Partner with security, compliance, SRE, and product teams.

Required Qualifications:
- 6+ years in Build/Release/DevOps/Platform Engineering; 2+ years leading large SCM/CI migrations.
- Proven experience migrating code from Bitbucket to GitHub Enterprise Cloud.
- Expert with Git, GitHub Enterprise Cloud, and GitHub Actions at organization scale.
- Proven experience running self-hosted/ephemeral runners and tuning CI performance.
- Strong CI/CD for polyglot stacks (Java/Kotlin, .NET, Node, Python, mobile).
- Hands-on with artifact registries (GitHub Packages/Artifactory), IaC (Terraform), containers (Docker), and one major cloud (AWS/Azure/GCP), preferably Azure.
- Security background: branch protection, CODEOWNERS, signed artifacts, SBOMs, dependency governance, secrets handling (OIDC).
- Healthcare or other regulated-industry experience; understanding of HIPAA controls and audit requirements.
- Excellent scripting (Bash/PowerShell) and one high-level language (Python/Go).
- Bitbucket-to-GitHub migrations using enterprise importers; Jira/GitHub Projects integrations.
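The DORA metrics this role tracks are derived from deployment records. A small, stdlib-only Python sketch of two of them (deployment frequency and change failure rate) over a hypothetical deployment log; the data and field names are invented for illustration:

```python
from datetime import datetime

# Hypothetical deployment log: (deployed_at, caused_incident)
deploys = [
    (datetime(2024, 5, 1), False),
    (datetime(2024, 5, 3), True),   # this deploy triggered an incident
    (datetime(2024, 5, 5), False),
    (datetime(2024, 5, 8), False),
]

# Deployment frequency: deploys per day over the observed window
days_observed = (deploys[-1][0] - deploys[0][0]).days or 1
deployment_frequency = len(deploys) / days_observed

# Change failure rate: share of deploys that caused an incident
change_failure_rate = sum(1 for _, failed in deploys if failed) / len(deploys)

print(f"deployment frequency: {deployment_frequency:.2f}/day")  # 0.57/day
print(f"change failure rate: {change_failure_rate:.0%}")        # 25%
```

Lead time and MTTR follow the same pattern, but additionally need commit timestamps and incident-resolution timestamps joined onto each deploy.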
Posted 1 day ago
7.0 years
0 Lacs
Bhopal, Madhya Pradesh, India
On-site
Location: Bhopal, MP, India
Experience: 7+ years

Key Responsibilities
- Design and build ETL pipelines in Azure Databricks (PySpark, Delta Lake) to load, clean, and deliver data across Bronze, Silver, and Gold layers.
- Implement data lakehouse architecture on Azure Data Lake Gen2 with partitioning, schema management, and performance optimization.
- Develop data models (dimensional/star schema) for reporting in Synapse and Power BI.
- Integrate Databricks with Azure services – ADF, Key Vault, Event Hub, Synapse Analytics, Purview, and Logic Apps.
- Build and manage CI/CD pipelines in Azure DevOps (YAML, Git repos, pipelines).
- Optimize performance through cluster tuning, caching, Z-ordering, Delta optimization, and job parallelization.
- Ensure data security and compliance (row-level security, PII masking, GDPR/HIPAA, audit logging).
- Collaborate with data architects and analysts to translate business needs into technical solutions.

Required Skills
- Strong experience in Azure Databricks (Python, PySpark, SQL).
- Proficiency with Delta Lake (ACID transactions, schema evolution, incremental loads).
- Hands-on with the Azure ecosystem – Data Factory, ADLS Gen2, Key Vault, Event Hub, Synapse.
- Knowledge of data governance and lineage tools (Purview; Unity Catalog is a plus).
- Strong understanding of data warehouse design and star schemas.
- Azure DevOps (YAML, Git repos, pipelines) experience.
- Good debugging skills for performance tuning and schema drift issues.

Good to Have
- Experience with healthcare or financial data.
- Familiarity with FHIR, OMOP, OpenEHR (for healthcare projects).
- Exposure to AI/ML integration using the Databricks ML runtime.
- Experience with Unity Catalog for governance across workspaces.

What You Will Deliver
- Automated snapshot + incremental pipelines in Databricks.
- Delta Lake architecture with partitioning, Z-ordering, and schema drift handling.
- Metadata-driven ingestion framework (YAML configs).
- Power BI datasets connected to the Gold layer.
- CI/CD pipelines for deployment across environments.
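The snapshot-plus-incremental pipelines described above usually rest on a high-watermark: each run picks up only records modified since the last stored watermark. A pure-Python sketch of that selection logic (in practice this would be a PySpark filter over a Delta table; the record shape and field names here are invented):

```python
def incremental_batch(records, last_watermark):
    """Select records modified after the stored watermark and return
    (batch, new_watermark). Illustrates the watermark pattern behind
    incremental Delta Lake loads; pure Python for clarity."""
    fresh = [r for r in records if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in fresh), default=last_watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "modified": "2024-05-01"},
    {"id": 2, "modified": "2024-05-03"},
    {"id": 3, "modified": "2024-05-05"},
]
batch, wm = incremental_batch(rows, "2024-05-02")
print([r["id"] for r in batch], wm)  # [2, 3] 2024-05-05
```

Persisting `wm` atomically with the write (e.g., in a control table) is what makes re-runs idempotent; comparing ISO-8601 date strings lexicographically is safe because they sort chronologically.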
Are you ready to take the lead in building scalable data solutions with Azure Databricks? Apply now!
Posted 5 days ago
4.0 years
18 - 36 Lacs
Hyderabad
On-site
About Blaize
Blaize is building a hybrid AI platform engineered to support edge-to-cloud intelligence at scale, delivering efficient, scalable AI designed for complex, multimodal workloads across industries. We serve critical infrastructure sectors including smart city, defense, retail, manufacturing, healthcare, and automotive. Our full-stack programmable processor architecture and low-code/no-code software platform enable real-time AI processing for high-performance computing at the network’s edge and in the data center. Blaize solutions deliver actionable insights with low power consumption, high efficiency, minimal size, and low cost. Headquartered in El Dorado Hills (CA), Blaize has over 200 employees worldwide, with teams in San Jose (CA) and Cary (NC), and subsidiaries in Hyderabad (India), Leeds and Kings Langley (UK), and Abu Dhabi (UAE). To learn more, visit www.blaize.com or follow us on LinkedIn at @blaizeinc.

JOB DESCRIPTION
- Write and maintain performant kernels for the Blaize HW.
- Write graph-level compiler passes/optimizations for the Blaize Graph Compiler/Optimizer.
- Lower operators from TVM/ONNX/PyTorch and other higher-level frameworks to HW-level binary.
- Work on challenging parallelization problems and find solutions to make complex operators work on the Blaize GSP.

JOB RESPONSIBILITIES
- Understand business needs and know how to create and manage the tools; confer with users and study system flow, data usage, and work processes, following the software development lifecycle.
- Identify, prioritize, and execute tasks in the software development life cycle.
- Perform verification testing.
- Collaborate with internal teams and vendors to fix and improve products.
- Ensure the quality of our software releases through a testing strategy for new features and changes.
- Understand the requirements against sub-components and crucial features of the upcoming Blaize SDK.
- Develop comprehensive test plans and collaborate with the automation team to ensure proper regression test coverage.
- Contribute to automation frameworks for the graph optimizer.

EDUCATION AND EXPERIENCE
- BTech/MTech in CS with 4-8 years of experience, including at least 4 years in software development.
- Strong knowledge of data structures, algorithms, and computer science fundamentals; a solid foundation in DSA is essential for this role.
- Strong coding ability and good problem-solving skills.

REQUIRED KNOWLEDGE, SKILLS, AND ABILITIES
- Understanding of computer architecture and graph processing; familiarity with assembly programming.
- Experience with traditional computer vision algorithms and image processing preferred.
- Strong analytic and debugging skills.
- Familiarity with AI/ML is also beneficial.
- Knowledge of test automation tools and regression setup.
- Hardware bring-up experience.
- Experience with C/C++, Python, data structures, DNNs, and ML networks.
- Experience with GPUs and knowledge of writing parallel kernels for GPUs.
- Understanding of YOLO networks, LLMs, etc. is a plus.

MANDATORY SKILLS
C, C++, Data Structures, STL libraries, Inference performance, DNN library, Graph Optimization, ONNX, PyTorch, kernel library, YOLO, LLM.

Blaize is an equal opportunity employer.
We pride ourselves on having a diverse workforce and we do not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law. We respect the gender, gender identity and gender expression of our applicants and employees, and we honor requests for preferred pronouns. It is our policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
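Graph-level compiler passes of the kind this role involves can be illustrated with a tiny constant-folding pass over a toy expression graph. This is a generic sketch of the technique, not the Blaize IR; the node encoding (numbers for constants, strings for tensor inputs, tuples for ops) is invented for the example:

```python
def fold_constants(node):
    """Recursively fold constant subtrees in a toy expression graph.

    A node is a number (constant), a string (input tensor name), or a
    tuple (op, left, right). When both operands of an op fold to
    constants, the op is replaced by its computed value."""
    if not isinstance(node, tuple):
        return node
    op, a, b = node
    a, b = fold_constants(a), fold_constants(b)
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return {"add": a + b, "mul": a * b}[op]
    return (op, a, b)

graph = ("mul", ("add", 2, 3), "x")   # represents (2 + 3) * x
print(fold_constants(graph))          # ('mul', 5, 'x')
```

Real graph optimizers apply many such rewrites (fusion, dead-node elimination, layout transforms) over a richer IR, but the pass structure, recursing over the graph and rewriting nodes that match a pattern, is the same.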
Posted 6 days ago
5.0 years
3 - 10 Lacs
India
On-site
This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 5 years
Job Type: full-time

We are seeking a Senior AI Systems Engineer who combines the mindset of a backend engineer with a deep understanding of AI/ML workflows. This role is perfect for someone who can bridge the gap between cutting-edge AI research and real-world, large-scale deployment, owning everything from data pipelines to APIs, orchestration, and monitoring. This is a hands-on engineering role where you’ll architect and implement scalable AI systems that are robust, reproducible, and production-ready.

What You’ll Do
- Architect Scalable AI Systems: Design and implement production-grade architectures with a strong emphasis on backend services, orchestration, and automation.
- Build End-to-End Pipelines: Develop modular pipelines for data ingestion, preprocessing, training, serving, and continuous monitoring.
- Develop APIs & Services: Build APIs, microservices, and backend logic to seamlessly integrate AI models into real-time applications.
- Operationalize AI: Collaborate with DevOps and infrastructure teams to deploy models across cloud, hybrid, and edge environments.
- Enable Reliability & Observability: Implement CI/CD, containerization, and monitoring tools to ensure robust and reproducible deployments.
- Optimize Performance: Apply profiling, parallelization, and hardware-aware optimizations for efficient training and inference.
- Mentor & Guide: Support junior engineers by sharing best practices in AI engineering and backend system design.

What You’ll Bring
- Programming Expertise: Strong backend development experience in Python (bonus: Go, Rust, or Node.js).
- Frameworks & APIs: Hands-on with FastAPI, Flask, or gRPC for building high-performance services.
- AI Lifecycle Knowledge: Deep understanding of model development workflows: data processing → training → deployment → monitoring.
- Systems & Infrastructure: Strong grasp of distributed systems, Kubernetes, Docker, CI/CD pipelines, and real-time data processing.
- MLOps Tools: Experience with MLflow, DVC, Weights & Biases, or similar platforms for experiment tracking and reproducibility.
- Cloud & Containers: Comfort with Linux, containerized deployments, and major cloud providers (AWS, GCP, or Azure).

Nice to Have
- Experience with computer vision models (YOLO, UNet, transformers).
- Exposure to streaming inference systems (Kafka, NVIDIA DeepStream).
- Hands-on with edge AI hardware (NVIDIA Jetson, Coral) and optimizations (TensorRT, ONNX).
- Experience in synthetic data generation or augmentation.
- Open-source contributions or research publications in AI/ML systems.

Qualifications
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
- 5+ years of software engineering experience, ideally in AI/ML-driven products.
- Demonstrated success in designing, building, and scaling production-ready AI systems.

Key Skills
Python, Backend Engineering, Machine Learning, Artificial Intelligence, TensorFlow, PyTorch, FastAPI, Docker, Kubernetes, CI/CD, MLflow, Cloud Platforms
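The modular pipelines this role builds (preprocess → infer → postprocess) are often composed from independent stages so each can be tested and swapped in isolation. A stdlib-only sketch with stub stages standing in for a real model-serving path; every name here is hypothetical:

```python
from typing import Callable

def make_pipeline(*stages: Callable):
    """Compose processing stages into one callable. Each stage receives
    the previous stage's output, keeping preprocessing, inference, and
    postprocessing modular and independently testable."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Stub stages; in a real service, `infer` would call a loaded model
preprocess = lambda text: text.lower().split()
infer = lambda tokens: {"tokens": len(tokens)}
postprocess = lambda out: f"token_count={out['tokens']}"

pipeline = make_pipeline(preprocess, infer, postprocess)
print(pipeline("Hello AI Systems"))  # token_count=3
```

In a FastAPI or gRPC service, a composed pipeline like this would sit behind the request handler, so the transport layer stays decoupled from the model logic.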
Posted 6 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking an experienced AI Architect to lead the design, development, and deployment of large-scale AI solutions. The ideal candidate will bridge the gap between business requirements and technical implementation, with deep expertise in generative AI and modern MLOps practices.

Key Responsibilities

AI Solution Design & Implementation
- Architect end-to-end AI systems leveraging large language models and generative AI technologies.
- Design scalable, production-ready AI applications that meet business objectives and performance requirements.
- Evaluate and integrate LLM APIs from leading providers (OpenAI, Anthropic Claude, Google Gemini, etc.).
- Establish best practices for prompt engineering, model selection, and AI system optimization.

Model Development & Fine-Tuning
- Fine-tune open-source models (Llama, Mistral, etc.) for specific business use cases.
- Implement custom training pipelines and evaluation frameworks.
- Optimize model performance, latency, and cost for production environments.
- Stay current with the latest model architectures and fine-tuning techniques.

Infrastructure & Deployment
- Deploy and manage AI models at enterprise scale using containerization (Docker) and orchestration (Kubernetes).
- Build robust, scalable APIs using FastAPI and similar frameworks.
- Design and implement MLOps pipelines for model versioning, monitoring, and continuous deployment.
- Ensure high availability, security, and performance of AI systems in production.

Business & Technical Leadership
- Collaborate with stakeholders to understand business problems and translate them into technical requirements.
- Provide technical guidance and mentorship to development teams.
- Conduct feasibility assessments and technical due diligence for AI initiatives.
- Create technical documentation, architectural diagrams, and implementation roadmaps.

Required Qualifications

Experience
- 7+ years of experience in machine learning engineering or data science.
- Proven track record of delivering large-scale ML solutions.

Technical Skills
Expert-level proficiency with LLM APIs (OpenAI, Claude, Gemini, etc.)
Hands-on experience fine-tuning transformer models (Llama, Mistral, etc.)
Strong proficiency in FastAPI, Docker, and Kubernetes
Experience with ML frameworks (PyTorch, TensorFlow, Hugging Face Transformers)
Proficiency in Python and modern software development practices
Experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services

Core Competencies
Strong understanding of transformer architectures, attention mechanisms, and modern NLP techniques
Experience with MLOps tools and practices (model versioning, monitoring, CI/CD)
Ability to translate complex business requirements into technical solutions
Strong problem-solving skills and architectural thinking

Preferred Qualifications
Experience with vector databases and retrieval-augmented generation (RAG) systems
Knowledge of distributed training and model parallelization techniques
Experience with model quantization and optimization for edge deployment
Familiarity with AI safety, alignment, and responsible AI practices
Experience in specific domains (finance, healthcare, legal, etc.)
Advanced degree in Computer Science, AI/ML, or related field
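The preferred qualifications above mention vector databases and retrieval-augmented generation (RAG). As a hedged, minimal sketch of the retrieval step such systems depend on (not any specific product's implementation), the snippet below ranks documents against a query by cosine similarity over toy bag-of-words vectors; the corpus and the `embed`/`cosine`/`retrieve` names are illustrative assumptions, and term counts stand in for learned embeddings.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real RAG systems use dense embeddings produced by a model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k;
    # in production this is a vector-database query, not a linear scan.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "fine tuning llama models for production",
    "kubernetes deployment of fastapi services",
    "prompt engineering for llm apis",
]
print(retrieve("llm prompt engineering", docs, k=1))
```

In a production RAG stack the linear scan would be replaced by an approximate-nearest-neighbor query against a vector database, and the retrieved passages would be appended to the LLM prompt.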
Posted 1 week ago
12.0 years
0 Lacs
hyderabad, telangana, india
On-site
JOB DESCRIPTION

Roles & responsibilities
Here are some of the key responsibilities of the Sr AI Research Scientist:
Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML). Experience with POCs on emerging and latest innovations in AI.
Multimodal Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities.
Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments.
Design and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance.
Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures.
Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection.
Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks.
Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing. Ability to drive multiple teams and cross-collaborate to ensure quality delivery.
Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders.
Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory technical & functional skills
Machine learning frameworks: PyTorch or TensorFlow.
Deep Learning algorithms: CNN, RNN, LSTM, Transformers, LLMs (BERT, GPT, etc.) and NLP algorithms.
Design experience for fine-tuning of open-source LLMs from Hugging Face, Meta Llama 3.1, BLOOM, Mistral AI, etc.
Exposure to GCP Vertex AI, Azure AI Foundry, or AWS SageMaker.
Scientific understanding of PEFT techniques: LoRA, QLoRA, etc.
In-depth conceptual understanding of emerging and latest innovations in AI.
Stay current with AI trends: MCP, A2A protocol, ACP, etc.
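The mandatory skills above call out PEFT techniques such as LoRA and QLoRA. The snippet below is a minimal sketch of the core LoRA idea, assuming toy 2x2 matrices and plain-Python matrix math (the `lora_merge` helper is invented for illustration, not part of any library): the frozen base weight W is left untouched, only the low-rank factors B and A are trained, and the merged weight is W' = W + (alpha / r) * B @ A.

```python
# Minimal sketch of the LoRA idea (PEFT): instead of updating a frozen
# weight matrix W directly, learn a low-rank delta B @ A and add it when
# merging: W' = W + (alpha / r) * B @ A. Dimensions here are toy-sized.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_merge(W, A, B, alpha: float, r: int):
    # W: d x d frozen base weights; B: d x r; A: r x d (rank-r factors).
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
B = [[1.0], [0.0]]             # d x r = 2 x 1
A = [[0.0, 2.0]]               # r x d = 1 x 2, i.e. rank r = 1
print(lora_merge(W, A, B, alpha=1.0, r=1))
```

The point of the factorization is parameter count: for rank r much smaller than d, B and A together hold 2*d*r trainable values instead of d*d, which is what makes fine-tuning large models tractable.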
Preferred Technical & Functional Skills
— Langgraph/CrewAI/Autogen
— Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps
— Ensure scalability and efficiency, handle data tasks
— Cloud computing experience: Azure/AWS/GCP
— BigQuery/Synapse

Key behavioral attributes/requirements
— Ability to mentor Managers and Tech Leads
— Ability to own project deliverables, not just individual tasks
— Understand business objectives and functions to support data needs
QUALIFICATIONS
This role is for you if you have the below

Educational Qualifications
Masters (MS by Research)/PhD or equivalent degree in Computer Science
Preference for research scholars from Tier 1 colleges: IITs, NITs, IISc, IIITs, ISIs, etc.

Work Experience
12+ years of experience with a strong record of publications (at least 5) in top-tier conferences and journals

#KGS
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Description

Roles & responsibilities
Here are some of the key responsibilities of the AI Research Scientist:
Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).
Multimodal Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities. Develop POCs and showcase them to stakeholders.
Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments.
Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance.
Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures.
Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection.
Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks.
Technical Mentorship: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing.
Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders.
Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory technical & functional skills
Strong programming skills in Python and frameworks like PyTorch or TensorFlow.
Scientific understanding and in-depth knowledge of Deep Learning (CNN, RNN, LSTM, Transformers), LLMs (BERT, GPT, etc.) and NLP algorithms. Familiarity with frameworks like Langgraph/CrewAI/Autogen to develop, deploy, and evaluate AI agents.
Ability to test and deploy open-source LLMs from Hugging Face, Meta Llama 3.1, BLOOM, Mistral AI, etc.
Hands-on experience with ML platforms offered through GCP (Vertex AI), Azure (AI Foundry), or AWS (SageMaker).

Preferred Technical & Functional Skills
— Ability to create detailed technical architecture with scalability in mind for AI solutions. Ability to explore hyperscalers and provide comparative analysis across different tools.
— Cloud computing experience, particularly with Google/AWS/Azure Cloud Platform, is essential.
With a strong foundation in understanding the Data Analytics Services offered by Google/AWS/Azure (BigQuery/Synapse).
— Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps.

Key behavioral attributes/requirements
— Ability to mentor junior developers
— Ability to own project deliverables, not just individual tasks
— Understand business objectives and functions to support data needs

#KGS
Qualifications
This role is for you if you have the below

Educational Qualifications
B.Tech/Masters (MS by Research)/PhD or equivalent degree in Computer Science
Preference for candidates from Tier 1 colleges such as IITs, NITs, IIITs, IISc, Indian Statistical Institute, etc.

Work Experience
3-5 years of experience with a strong record of publications (at least 4) in top-tier conferences and journals
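Both research-scientist postings above emphasize agentic AI systems and agent frameworks such as Langgraph/CrewAI/Autogen. Under heavy simplification, the sketch below shows the loop those frameworks formalize: a model decides between calling a tool and returning a final answer, and tool results are fed back into the conversation state. The rule-based `fake_model` stands in for a real LLM, and the tool names are invented for the example.

```python
# Hedged sketch of an agentic loop: plan -> act (tool call) -> observe
# -> answer. Real frameworks add graphs, memory, and multiple agents.

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_model(state: list) -> dict:
    # Stand-in for an LLM: inspect the history and pick the next action.
    if not any(m["role"] == "tool" for m in state):
        return {"action": "tool", "name": "add", "args": (2, 3)}
    result = [m for m in state if m["role"] == "tool"][-1]["content"]
    return {"action": "final", "content": f"sum is {result}"}

def run_agent(question: str, max_steps: int = 5) -> str:
    state = [{"role": "user", "content": question}]
    for _ in range(max_steps):           # cap steps so the agent halts
        step = fake_model(state)
        if step["action"] == "final":
            return step["content"]
        result = TOOLS[step["name"]](*step["args"])
        state.append({"role": "tool", "content": result})
    return "gave up"

print(run_agent("what is 2 + 3?"))
```

The step cap is the important design choice: autonomous loops need an explicit budget so a confused model cannot spin forever.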
Posted 1 week ago
5.0 years
0 Lacs
india
On-site
This role is for one of Weekday's clients
Min Experience: 5 years
JobType: full-time

This role is ideal for someone who thinks like a backend engineer but speaks the language of AI, bridging the gap between advanced AI development and real-world deployment at scale. We are looking for a Senior AI Developer with strong backend engineering and architectural expertise to design, build, and scale production-grade AI systems. This is a hands-on, technical role that involves working across data pipelines, APIs, model serving, and monitoring, ensuring robustness, reproducibility, and automation throughout the AI lifecycle.

Requirements

Key Responsibilities
Design and implement scalable AI architectures with a focus on backend services, orchestration, and operationalization.
Build modular pipelines for data preprocessing, model training, serving, and monitoring.
Develop APIs, microservices, and backend logic for real-time AI model integration and inference.
Collaborate with DevOps, data, and infrastructure teams to deploy AI models across cloud, hybrid, and edge environments.
Apply best practices for CI/CD, containerization, and version control.
Optimize performance with profiling, parallelization, and hardware-aware deployments (GPUs, Jetson, etc.).
Ensure reproducibility and observability using tools like MLflow, Prometheus, and Grafana.
Mentor junior engineers in scalable AI system design and engineering best practices.

Must-Have Skills
Strong backend programming in Python (bonus: Go, Rust).
Experience with FastAPI, Flask, gRPC, or similar frameworks.
Deep understanding of the AI lifecycle: data ingestion → training → deployment → monitoring.
Proficiency with Docker, Kubernetes, and CI/CD pipelines.
Knowledge of distributed systems, asynchronous processing, and real-time API patterns.
Experience with MLflow, DVC, or Weights & Biases.
Comfortable with Linux systems and containerized AI deployments.

Nice to Have
Exposure to computer vision (YOLO, UNet, transformers).
Experience with streaming inference systems (e.g., NVIDIA DeepStream, Kafka).
Hands-on with edge AI hardware (Jetson, Coral) and optimizations (ONNX, TensorRT).
Familiarity with cloud platforms (AWS, GCP, Azure).
Experience in synthetic data generation or augmentation.
Open-source contributions or publications in AI/ML systems.

Qualifications
B.E./B.Tech/M.Tech in Computer Science, Software Engineering, or a related field.
5+ years of software engineering experience, ideally in AI/ML product companies.
Proven track record of designing, building, and deploying production-grade AI systems.

Skills: Python, Artificial Intelligence, Machine Learning, OpenCV, TensorFlow, Docker, Node.js, Express.js
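The must-have skills above include asynchronous processing and real-time API patterns for model serving. One common pattern is request micro-batching: concurrent requests are held very briefly and run through the model as a single batch to raise throughput. The sketch below is a hedged illustration with a stubbed model (`fake_model_batch`) rather than a real inference backend; the 20 ms window and the `MicroBatcher` class name are arbitrary choices for the example.

```python
import asyncio

async def fake_model_batch(inputs: list[int]) -> list[int]:
    await asyncio.sleep(0.01)          # stand-in for one batched forward pass
    return [x * 2 for x in inputs]

class MicroBatcher:
    def __init__(self, max_wait: float = 0.02):
        self.max_wait = max_wait
        self.pending: list[tuple[int, asyncio.Future]] = []

    async def infer(self, x: int) -> int:
        # Park the request on a future; the batch flush resolves it.
        fut = asyncio.get_running_loop().create_future()
        self.pending.append((x, fut))
        if len(self.pending) == 1:      # first request schedules the flush
            asyncio.get_running_loop().call_later(
                self.max_wait, lambda: asyncio.ensure_future(self._flush()))
        return await fut

    async def _flush(self):
        batch, self.pending = self.pending, []
        outputs = await fake_model_batch([x for x, _ in batch])
        for (_, fut), y in zip(batch, outputs):
            fut.set_result(y)

async def main():
    b = MicroBatcher()
    # Four concurrent callers end up in one model batch.
    return await asyncio.gather(*(b.infer(i) for i in range(4)))

print(asyncio.run(main()))
```

The trade-off is explicit: `max_wait` adds a small, bounded latency to every request in exchange for far better hardware utilization under load.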
Posted 1 week ago
0.0 - 3.0 years
0 - 0 Lacs
hyderabad, telangana
On-site
Test Automation Engineer (Mandatory experience in Playwright with TypeScript)
Location: India (on-site, fully work from office, Gachibowli, Hyderabad)
Company: Venkat Tech Global Solutions Private Limited
Client: German client, healthcare domain
Employment Type: Full Time

About Us
Venkat Tech Global Solutions Pvt. Ltd. is a leading technology services provider delivering high-quality solutions to global clients. We are hiring passionate Test Automation Engineers for our esteemed European client, to strengthen their Quality Assurance (QA) team and support the delivery of stable, reliable web applications.

Key Responsibilities:
Independently develop and maintain automated UI and API tests using TypeScript and the Playwright framework.
Parameterize and modularize test code (page objects, helper functions).
Integrate tests into CI/CD pipelines with reporting (JUnit, Allure, etc.).
Optimize test data strategies (factories, fixtures, mocks).
Ensure test coverage across resolutions and device types (mobile/tablet).
Analyze and document errors/regressions, and collaborate on test strategies.
Participate in code reviews and work closely with development teams.

Requirements:
Test automation experience with TypeScript and frameworks like Playwright, Cypress, Jest.
Strong knowledge of web technologies (DOM, HTTP, API testing).
Experience in REST API testing and authentication.
Skilled in responsive and cross-device testing.
Understanding of code coverage, test parallelization, and test design.
Proficient with CI/CD processes and Git workflows.
Analytical mindset and problem-solving abilities.
Ability to understand and learn basic German is an added advantage.

What We Offer
Opportunity to work with a leading European client on advanced tech projects.
Collaborative, growth-oriented environment.
Training and upskilling opportunities.
Innovation and growth

Job Types: Full-time, Permanent
Pay: ₹40,000.00 - ₹75,000.00 per month
Benefits: Health insurance, Provident Fund
Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required)
Experience: Playwright: 3 years (Required)
Location: Hyderabad, Telangana (Required)
Work Location: In person
Speak with the employer: +91 6381065099
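The responsibilities above include optimizing test data strategies (factories, fixtures, mocks). A minimal sketch of the factory pattern follows; the `patient_factory` name and its fields are invented examples in keeping with the healthcare domain, not a real schema. Each call yields a valid default record, and a test overrides only the fields it cares about.

```python
import itertools

# Test-data factory sketch: sequential ids keep generated records unique,
# and keyword overrides let each test state only what matters to it.
_seq = itertools.count(1)

def patient_factory(**overrides) -> dict:
    n = next(_seq)
    record = {
        "id": n,
        "name": f"Test Patient {n}",
        "email": f"patient{n}@example.com",
        "insured": True,
    }
    record.update(overrides)
    return record

# A test that only cares about the uninsured path overrides one field:
uninsured = patient_factory(insured=False)
print(uninsured["insured"], uninsured["name"])
```

Compared with hand-written fixtures, factories keep tests readable (the override names the behavior under test) and resilient when the default schema grows new required fields.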
Posted 1 week ago
10.0 years
0 Lacs
india
Remote
Role: Build Engineer
Experience: 8–10 years
Location: Remote, India

Responsibilities

Build & Release Engineering
Standardizing build images and caching strategies
Creating environment promotion flows (dev → stage → prod)
Integrating QA automation and security testing (SAST/DAST)

Source Control Migration Projects
Migrating from Bitbucket to GitHub Enterprise Cloud (GHEC)
Planning cutover strategies, rollback plans, and scripting migration runbooks

CI/CD Pipeline Design & Implementation
Architecting multistage pipelines using GitHub Actions
Implementing reusable workflows and composite actions
Managing self-hosted runners (Linux/Windows/macOS)

Security & Compliance in Regulated Environments
Implementing GitHub Advanced Security features
Enforcing SSO/SAML, signed commits, and PR policies
Ensuring HIPAA, SOC2, GDPR compliance in build/release pipelines

Observability & Reliability
Tracking DORA metrics and pipeline telemetry
Publishing dashboards and SLAs for CI performance

Change Management & Enablement
Leading training, documentation, and migration pilots
Collaborating with security, SRE, and product teams

Requirements
Code migration experience with details (code base, customers, companies)
Experience migrating repositories/TB of data from Bitbucket to GitHub Enterprise Cloud
Vendor experience with public cloud providers (Azure preferable; AWS/GCP acceptable)
Healthcare domain experience, emphasizing HIPAA/SOC2 compliance
Leadership in SCM/CI migrations (2+ years leading large migrations)
Programming in Python/Go
Detailed project experience in build, release, and CI/CD

Mandatory Skills
GitHub Enterprise Cloud & GitHub Actions (organization-level expertise)
Bitbucket to GitHub migration experience
CI/CD for polyglot stacks (Java/Kotlin, .NET, Node, Python, mobile)
Self-hosted/ephemeral runners setup and performance tuning
Artifact registries (GitHub Packages, Artifactory)
Infrastructure as Code (Terraform)
Containers (Docker)
Cloud platforms (preferably Azure; AWS/GCP acceptable)
Security practices: CODEOWNERS, signed artifacts, SBOMs, OIDC for secrets
Regulated industry experience (especially healthcare)
Scripting: Bash/PowerShell
Programming: Python or Go
Jira/GitHub Projects integration

Nice-to-Have Skills
Open Policy Agent (OPA) or similar policy-as-code tools
Autoscaling runner fleets (containerized/ephemeral)
Progressive delivery strategies (canary, blue-green deployments)
Test parallelization and dependency caching
DORA metrics tracking and CI observability dashboards
Experience with SOC2, GDPR beyond HIPAA
Experience in other regulated industries (finance, government)
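The observability responsibilities above include tracking DORA metrics. As a hedged sketch of what that computation looks like, the snippet below derives deployment frequency, average lead time, and change failure rate from a toy deployment log; the log, the field names, and the `dora` helper are invented for illustration (MTTR is omitted because the toy log carries no recovery timestamps).

```python
from datetime import datetime

# Toy deployment log: when each deploy shipped, when its change was
# committed, and whether it caused a failure in production.
deployments = [
    {"at": datetime(2024, 1, 1), "commit_at": datetime(2023, 12, 30), "failed": False},
    {"at": datetime(2024, 1, 3), "commit_at": datetime(2024, 1, 2), "failed": True},
    {"at": datetime(2024, 1, 5), "commit_at": datetime(2024, 1, 4), "failed": False},
]

def dora(deploys: list[dict], window_days: int = 7) -> dict:
    lead_times = [(d["at"] - d["commit_at"]).days for d in deploys]
    failures = sum(d["failed"] for d in deploys)
    return {
        "deploy_frequency_per_day": round(len(deploys) / window_days, 2),
        "avg_lead_time_days": sum(lead_times) / len(lead_times),
        "change_failure_rate": failures / len(deploys),
    }

print(dora(deployments))
```

In practice the log rows would come from pipeline telemetry (e.g. workflow run events) rather than literals, but the arithmetic over them is exactly this simple.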
Posted 1 week ago
6.0 years
0 Lacs
ahmedabad, gujarat, india
On-site
Role summary:
Be an integral part of our enterprise-scale migration from Bitbucket to GitHub Enterprise Cloud (GHEC), design and roll out GitHub Actions based CI/CD, and establish secure, compliant, and observable build/release pipelines for a 300-developer organization in the healthcare domain. You will be the technical owner for source control strategy, build infrastructure, and release automation, with an emphasis on reliability, speed, and HIPAA/SOC2 compliance.

What you'll do:

Plan & execute the migration
Inventory repos, pipelines, users, secrets, and integrations; define cutover strategy and rollback plans.
Migrate code, issues, and CI from Bitbucket to GHEC with minimal downtime; script repeatable migration runbooks.
Normalize repository standards (branch naming, default branches, protection rules, CODEOWNERS, templates).

Design CI/CD on GitHub Actions
Architect multistage pipelines (build → test → security scans → artifact publish → deploy).
Implement reusable workflows, composite actions, and organization-level workflow templates.
Set up self-hosted runners and autoscaling runner fleets (containerized/ephemeral) for Linux/Windows/macOS as needed.
Establish secret management via OIDC to cloud providers; remove long-lived credentials.

Security & compliance for healthcare
Enable GitHub Advanced Security (code scanning, Dependabot, secret scanning).
Enforce SSO/SAML, branch protection, required checks, signed commits, and PR review policies.
Implement policy-as-code (e.g., Open Policy Agent, repo rulesets), change-management controls, and audit-ready logs.
Ensure pipelines and artifacts are aligned with HIPAA, SOC2, GDPR, and least-privilege principles; avoid PHI in logs.

Build & release engineering
Standardize build images, caching, and artifact storage; speed up CI with dependency caches and test parallelization.
Create environment promotion flows (dev/stage/prod) with approvals and progressive delivery (canary/blue-green).
Integrate QA automation, performance tests, and SAST/DAST into pipelines.

Observability & reliability
Define and track DORA metrics (lead time, deployment frequency, MTTR, change failure rate).
Add telemetry for pipeline duration, queue times, and flake rates; publish dashboards and SLAs for CI.

Change management & enablement
Drive communications, training, and documentation; run office hours and migration pilots.
Partner with security, compliance, SRE, and product teams.

Required Qualifications:
6+ years in Build/Release/DevOps/Platform Engineering; 2+ years leading large SCM/CI migrations.
Proven experience migrating code from Bitbucket to GitHub Enterprise Cloud.
Expert with Git, GitHub Enterprise Cloud, and GitHub Actions at organization scale.
Proven experience running self-hosted/ephemeral runners and tuning CI performance.
Strong CI/CD for polyglot stacks (Java/Kotlin, .NET, Node, Python, mobile).
Hands-on with artifact registries (GitHub Packages/Artifactory), IaC (Terraform), containers (Docker), and one major cloud (AWS/Azure/GCP), preferably Azure.
Security background: branch protection, CODEOWNERS, signed artifacts, SBOMs, dependency governance, secrets handling (OIDC).
Healthcare or other regulated industry experience; understanding of HIPAA controls and audit requirements.
Excellent scripting (Bash/PowerShell) and one high-level language (Python/Go).
Bitbucket to GitHub migrations using enterprise importers; Jira/GitHub Projects integrations.
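The migration plan above includes normalizing repository standards across migrated repos. The sketch below illustrates one way that step could be scripted, offered as an assumption rather than a description of the actual runbook: an organization baseline is applied over each repo's settings, taking the stricter value for numeric policies. The setting keys are invented examples, not GitHub API field names.

```python
# Org-wide baseline applied to every migrated repo's settings.
BASELINE = {
    "default_branch": "main",
    "required_reviews": 2,
    "require_signed_commits": True,
}

def normalize(repo: dict) -> dict:
    merged = dict(repo)
    for key, std in BASELINE.items():
        # Numeric policies keep the stricter (higher) value; note that
        # bool is a subclass of int in Python, so exclude it explicitly.
        if isinstance(std, int) and not isinstance(std, bool):
            merged[key] = max(merged.get(key, 0), std)
        else:
            merged[key] = std   # baseline is mandatory for these settings
    return merged

legacy = {"name": "billing-svc", "default_branch": "master", "required_reviews": 1}
print(normalize(legacy))
```

Running a pass like this after the importer finishes makes drift visible and repeatable: the same script can be re-run in CI to enforce the standards it established.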
Posted 1 week ago
160.0 years
0 Lacs
pune, maharashtra, india
Remote
Job Description
The Analyst, Lead Quality Assurance develops and executes quality improvement roadmaps and acts as an ambassador of quality assurance by creating robust test plans, developing the automation framework, creating pipelines, and executing scripts, along with SQL script development, data validation, quality metric tracking, and implementation and maintenance of Quality Assurance best practices and testing standards across the business unit.

What You Will Do:
Create and execute test strategy and industry-standard test plans for better quality and efficiency for Stewart products.
Collect requirements to develop testing guidelines and procedures for functional, regression, and end-to-end testing through automated testing solutions.
Assist functional leadership to help build the core competencies of the team members.
Analyze business use cases to find gaps in current test coverage and design documents for further improvement.
Able to prioritize, train, and guide the team and work effectively across teams.
Independently determine the most effective engineering solutions to meet business requirements.
Analyze the applications, identify testing goals, and determine the best possible automation testing tool and framework for the organization.
Design, maintain, and enhance the existing test automation framework; develop test automation APIs and scripts in a POM framework; and maintain and enhance the current framework to support a continuous integration environment with an automated smoke and regression test suite.
Document coding standards and perform code reviews periodically.
Develop automation project proposals with return on investment over time.
Build CI/CD pipelines to enable automated test execution on each build in order to provide continuous feedback to the development team.
Ensure high-quality test and code coverage with custom coding, maintainability of scripts, reliability of equipment and tools, and overall robustness of testing efforts.
Coordinate and collaborate with onshore/offshore external team members to automate test scripts, run the test suites, and analyze and visualize the findings.
Enhance and achieve the maximum possible test coverage for functional, automation, system, and regression testing.
Perform web service testing and develop automation scripts in SoapUI/Postman for REST APIs and SQL procedures.
Mentor less experienced Automation Test Engineers.

What You Will Need:

Education and Experience
Bachelor's degree in a relevant field preferred.
Three or more (3+) years of desktop application testing experience.
Five or more (5+) years of experience leading a cross-functional QA team.
Eight or more (8+) years of QA experience with a combination of functional and automation testing.
Must be experienced with desktop-based applications (both manual and automation testing).
Experience with DevOps and CI/CD pipelines (Jenkins, ADO, Bamboo, Harness, etc.).
Hands-on experience with test automation technologies, e.g., Cucumber (TDD/BDD), Selenium, Selenium Grid, test parallelization, POM, TestNG, and JUnit.

Knowledge, Skills, and Abilities
Able to lead a globally distributed team.
Strong background in the SDLC and Agile (Scrum) software development methodology.
Solid background in one of the project repository management systems, e.g., Azure DevOps, Jira.
Solid background in repository and version control systems, e.g., Bitbucket, GitHub, Git Bash.
Working knowledge of the ABS Health, Safety, Quality & Environmental Management System.
Provides advice within area of expertise and may contribute to development of organization functional strategy.
Develops creative solutions through conceptual and innovative thinking.
Capable of demonstrating strong leadership skills with the ability to influence others.

Reporting Relationships:
Reports to the Senior Manager, Quality Assurance, and will have direct reports.
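The hands-on skills above mention test parallelization. One common building block is deterministic sharding: splitting the suite across N CI agents so each run picks a stable, disjoint subset. The sketch below is illustrative (the test names are invented); it uses crc32 so the assignment is reproducible across machines, unlike Python's salted built-in hash().

```python
import zlib

def shard(tests: list[str], shard_index: int, shard_count: int) -> list[str]:
    # crc32 of the test name gives a stable hash across runs and hosts,
    # so shard membership never changes between CI agents.
    return [t for t in tests
            if zlib.crc32(t.encode()) % shard_count == shard_index]

tests = [f"test_case_{i}" for i in range(10)]
parts = [shard(tests, i, 3) for i in range(3)]
assert sorted(sum(parts, [])) == sorted(tests)   # disjoint and complete
print([len(p) for p in parts])
```

Hash-based sharding trades perfect balance for zero coordination: no agent needs a central queue, only its own index and the total shard count (which CI systems typically expose as environment variables).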
About Us We set out more than 160 years ago to promote the security of life and property at sea and preserve the natural environment. Today, we remain true to our mission and continue to support organizations facing a rapidly evolving seascape of challenging regulations and new technologies. Through it all, we are anchored by a vision and mission that help our clients find clarity in uncertain times. ABS is a global leader in marine and offshore classification and other innovative safety, quality, and environmental services. We’re at the forefront of supporting the global energy transition at sea, the application of remote and autonomous marine systems, cutting-edge technical solutions, and many more exciting advancements. Our commitment to safety, reliability, and efficiency is ever-present, guiding our clients to safer and more efficient operations. Equal Opportunity ABS Bureau is committed to the equal employment opportunity of its employees and prohibits discrimination against any employee or qualified applicant based on race, color, creed, religion, national origin, sex, gender identity, age, disability, marital status, sexual orientation, citizenship status or veteran status, or other non-work-related characteristics that may be protected under the law of the Federal Government or specific state employment laws. Notice ABS and Affiliated Companies (ABS) will not pay a fee to any third-party agency without a valid ABS Master Service Agreement (MSA) authorized and signed by Human Resources. Any resume, CV, application, or other forms of candidate submission provided to any employee of ABS without a valid MSA on file will be considered property of ABS, and no fee will be paid. Other This job description is not intended, and should not be construed, to be an all-inclusive list of responsibilities, skills, efforts or working conditions associated with the job of the incumbent. 
It is intended to be an accurate reflection of the principal job elements essential for making a fair decision regarding the pay structure of the job.
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
bengaluru
On-site
As a Data Engineer specializing in geospatial data, your primary responsibility is to design, build, and maintain data infrastructure and systems that handle geospatial information effectively. You will work closely with cross-functional teams, including data scientists, geospatial analysts, and software engineers, to ensure that geospatial data is collected, processed, stored, and analyzed efficiently and accurately.
Key Responsibilities:
- Data Pipeline Development: Design and implement robust data pipelines to acquire, ingest, clean, transform, and process geospatial data from various sources such as satellites, aerial imagery, drones, and geolocation services.
- Data Ingestion, Storage, and Extraction: Develop data models and schemas tailored to geospatial data structures, ensuring optimal performance and scalability for storage and retrieval operations.
- Spatial Database Management: Manage geospatial databases, including both traditional relational databases (e.g., PostgreSQL with the PostGIS extension) and NoSQL databases (e.g., MongoDB, Cassandra), to store and query spatial data efficiently.
- Geospatial Analysis Tools Integration: Integrate geospatial analysis tools and libraries (e.g., GDAL, GeoPandas, Fiona) into data processing pipelines and analytics workflows to perform spatial data analysis, visualization, and geoprocessing tasks.
- Geospatial Data Visualization (frontend-related): Collaborate with data visualization specialists to create interactive maps, dashboards, and visualizations that effectively communicate geospatial insights and patterns to stakeholders.
- Performance Optimization: Identify and address performance bottlenecks in data processing and storage systems, leveraging techniques such as indexing, partitioning, and parallelization to optimize geospatial data workflows.
- Data Quality Assurance: Implement data quality checks and validation procedures to ensure the accuracy, completeness, and consistency of geospatial data throughout the data lifecycle.
- Geospatial Data Governance: Establish data governance policies and standards specific to geospatial data, including metadata management, data privacy, and compliance with geospatial regulations and standards (e.g., INSPIRE, OGC).
- Collaboration and Communication: Collaborate with cross-functional teams to understand geospatial data requirements and provide technical expertise and support. Communicate findings, insights, and technical solutions effectively to both technical and non-technical stakeholders.
Requirements:
Must-have:
- Bachelor's or Master's degree in Computer Science or a related field.
- 3-5 years of experience working in the field and deploying pipelines in production.
- Strong programming skills in languages such as Python, Java, or Scala, with experience in geospatial libraries and frameworks (e.g., Rasterio, Shapely).
- Experience with distributed computing frameworks (e.g., Apache Spark, Airflow) and cloud-based data platforms (e.g., AWS, Azure, Google Cloud Platform).
- Familiarity with geospatial data formats and standards (e.g., GeoJSON, Shapefile, KML) and geospatial data visualization tools (e.g., Mapbox, Leaflet, Tableau).
- Strong analytical and problem-solving skills, with the ability to work with large and complex geospatial datasets.
Good-to-have:
- Proficiency in SQL and experience with geospatial extensions for relational databases (e.g., PostGIS).
- Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Experience with geospatial libraries such as Rasterio, Xarray, GeoPandas, and GDAL.
- Knowledge of distributed computing frameworks such as Dask.
- Familiarity with STAC, GeoParquet, and cloud-native tools.
- Experience productionising data science code.
The role of a Data Engineer for geospatial data is crucial in enabling organizations to leverage the power of geospatial information for various applications, including urban planning, environmental monitoring, transportation, agriculture, and emergency response.
Benefits:
- Medical health cover for you and your family, including unlimited online doctor consultations
- Access to mental health experts for you and your family
- Dedicated allowances for learning and skill development
- Comprehensive leave policy with casual leaves, paid leaves, marriage leaves, and bereavement leaves
- Twice-a-year appraisals
Job Type: Full-time
Work Location: In person
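As a small example of the geospatial data-format handling this role involves, here is a dependency-free sketch that computes the bounding box of a GeoJSON FeatureCollection of points. The coordinates are made-up sample values; a production pipeline would more likely use GeoPandas or Shapely for this:

```python
import json

def feature_collection_bbox(geojson_str):
    """Return [min_lon, min_lat, max_lon, max_lat] for a GeoJSON
    FeatureCollection containing Point features (lon/lat order)."""
    fc = json.loads(geojson_str)
    lons, lats = [], []
    for feature in fc["features"]:
        geom = feature["geometry"]
        if geom["type"] == "Point":
            lon, lat = geom["coordinates"]
            lons.append(lon)
            lats.append(lat)
    return [min(lons), min(lats), max(lons), max(lats)]

# Hypothetical sample data: two point features.
doc = json.dumps({
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {},
         "geometry": {"type": "Point", "coordinates": [77.59, 12.97]}},
        {"type": "Feature", "properties": {},
         "geometry": {"type": "Point", "coordinates": [72.88, 19.08]}},
    ],
})
print(feature_collection_bbox(doc))  # [72.88, 12.97, 77.59, 19.08]
```

Note that GeoJSON stores coordinates in longitude-latitude order, a frequent source of bugs in ingestion pipelines.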
Posted 2 weeks ago
0 years
0 Lacs
bareilly, uttar pradesh, india
On-site
- Excellent understanding of numerical methods, fluid mechanics, and CFD software development
- Experience with C++, Python, and FORTRAN, with OOP skills
- Experience with CFD software testing
- Expertise in multiphase flows, phase change, thermodynamics, particle dynamics, melting, and solidification
- Expertise in multiphase flows, sediment scour modeling, particle dynamics, and large-scale river modeling
- Expertise in multiphase flows, reaction kinetics, thermodynamics, reactors, and chemical engineering processes
- Excellent paper-publishing and communication skills
- Expertise in OpenFOAM
- Experience with AI, especially deep learning using Keras, PyTorch, or TensorFlow
- Understanding of high-performance computing and parallelization on CPUs and GPUs
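As a small taste of the numerical-methods background this posting asks for, here is an explicit (FTCS) finite-difference step for the 1-D heat equation, including its well-known stability limit. This is a textbook sketch, not production CFD code:

```python
def heat_step(u, alpha, dx, dt):
    """One explicit (FTCS) time step for u_t = alpha * u_xx on a 1-D grid
    with fixed boundary values. Stable only when alpha*dt/dx^2 <= 0.5."""
    r = alpha * dt / dx**2
    if r > 0.5:
        raise ValueError(f"unstable step: r = {r:.3f} > 0.5")
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0, 0.0, 1.0, 0.0, 0.0]   # initial temperature spike
u = heat_step(u, alpha=1.0, dx=1.0, dt=0.25)
print(u)  # [0.0, 0.25, 0.5, 0.25, 0.0]
```

The spike diffuses symmetrically into its neighbors, as expected; with dt above the stability limit the function refuses the step instead of producing an oscillating, blowing-up solution.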
Posted 2 weeks ago
0 years
4 - 7 Lacs
cochin
On-site
Location: Kochi
Employment Type: Full Time
Work Mode: Hybrid
Experience: 3-6 yrs
Job Code: BEO-5640
Posted Date: 28/08/2025
Job Description
Roles & Responsibilities
1. Conduct original research on generative AI models, focusing on model architectures, training methods, fine-tuning, and evaluation strategies.
2. Build proofs of concept (POCs) with emerging AI innovations and assess their feasibility for production.
3. Design and experiment with multimodal generative models (text, images, audio, and other modalities).
4. Develop autonomous, agent-based AI systems (agentic AI) capable of adaptive decision-making.
5. Lead the design, training, fine-tuning, and deployment of generative AI systems on large datasets.
6. Optimize AI algorithms for efficiency, scalability, and computational performance using parallelization, distributed systems, and hardware acceleration.
7. Manage data preprocessing and feature engineering (cleaning, normalization, dimensionality reduction, feature selection).
8. Evaluate and validate models using industry-standard benchmarks; iterate to achieve target KPIs.
9. Provide technical leadership and mentorship to junior researchers and engineers.
10. Document research findings, model architectures, and experimental outcomes in technical reports and publications.
11. Stay updated with the latest advancements in NLP, deep learning, and generative AI, fostering a culture of innovation within the team.
Mandatory Technical & Functional Skills
- Strong expertise in PyTorch or TensorFlow.
- Proficiency in deep learning architectures: CNN, RNN, LSTM, Transformers, and LLMs (BERT, GPT, etc.).
- Experience fine-tuning open-source LLMs (Hugging Face, LLaMA 3.1, BLOOM, Mistral AI, etc.).
- Hands-on knowledge of PEFT techniques (LoRA, QLoRA, etc.).
- Familiarity with emerging AI frameworks and protocols (MCP, A2A, ACP, etc.).
- Deployment experience with cloud AI platforms: GCP Vertex AI, Azure AI Foundry, or AWS SageMaker.
- Proven track record of building POCs for cutting-edge AI use cases.
Desired Candidate Profile
Preferred Technical & Functional Skills
- Experience with LangGraph, CrewAI, or AutoGen for agent-based AI.
- Large-scale deployment of GenAI/ML projects with MLOps/LLMOps best practices.
- Experience handling scalable data pipelines (BigQuery, Synapse, etc.).
- Strong understanding of cloud computing architectures (Azure, AWS, GCP).
Key Behavioral Attributes
- Strong ownership mindset: able to lead end-to-end project deliverables, not just tasks.
- Ability to align AI solutions with business objectives and data requirements.
- Excellent communication and collaboration skills for cross-functional projects.
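The LoRA technique listed under PEFT adapts a frozen weight matrix W by adding a scaled low-rank product, W' = W + (alpha / r) * B @ A, so only the small B and A matrices are trained. Below is a toy, dependency-free sketch of that update rule; real fine-tuning would use Hugging Face's peft with PyTorch, and the matrices here are invented:

```python
def matmul(a, b):
    """Plain-Python matrix multiply for small demo matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def scale(a, s):
    return [[x * s for x in row] for row in a]

# Frozen pretrained weight W (2x2) and low-rank adapters B (2x1), A (1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]
A = [[0.0, 2.0]]
alpha, r = 2.0, 1  # LoRA scaling factor and rank

# Effective weight after adaptation: W' = W + (alpha / r) * B @ A
W_eff = add(W, scale(matmul(B, A), alpha / r))
print(W_eff)  # [[1.0, 4.0], [0.0, 1.0]]
```

The point of the rank-r factorization is parameter count: for a d x d layer, B and A hold 2*d*r values instead of d*d, which is why LoRA/QLoRA make fine-tuning large LLMs tractable.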
Posted 2 weeks ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
We are looking for candidates who can join us within 30 days.
Location: Hyderabad
Experience range: 5+ years
Notice period: 30 days
The ideal candidate will have the following attributes:
- 5+ years of experience in software benchmarking and profiling across various hardware, operating systems, and workload scenarios.
- 2+ years of strong experience in the Python programming language.
- Experience designing test plans and test metrics, writing tests, and performing benchmarking with different permutations and combinations for functional and performance analysis.
- Deep understanding of operating systems, hardware platforms, CPUs, cores, parallelization, GPUs, network performance parameters, timings, etc.
- Partners with product release, software verification, marketing, and IT infrastructure teams.
- Strong communication and analytical skills.
- Demonstrates flexible adaptability in working with maturing, generation-dependent software development and testing methods.
- Executes independently to develop mock-ups or requirement prototypes for features of moderate to high complexity, and can effectively articulate these to relevant stakeholders.
Education Requirements
Bachelor's degree or foreign equivalent in Computer Science, Computer Engineering, Electrical Engineering, or a related field.
Years of Experience
5+ years of experience in Python scripting, automation coding, and hardware/software product benchmarking and profiling, producing comparative analyses.
Job Skills Requirements
Must have work experience, coursework, or project background in the following:
- Python programming and automation
- Linux shell programming
- Linux operating system / command-line interface
- Bash scripting
- Benchmarking/testing methodologies
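A minimal sketch of the kind of benchmarking harness this role describes, using only the standard library. `workload` is a stand-in for the real software under test, and the repeat count is arbitrary:

```python
import statistics
import time

def benchmark(fn, *args, repeat=5):
    """Time fn(*args) `repeat` times; report min/mean/max in seconds.
    The minimum is usually the most stable figure for comparisons,
    since it is least affected by OS scheduling noise."""
    timings = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return {"min": min(timings),
            "mean": statistics.mean(timings),
            "max": max(timings)}

def workload(n):
    # Stand-in CPU-bound task for the real software under test.
    return sum(i * i for i in range(n))

result = benchmark(workload, 50_000)
print({k: f"{v * 1e3:.3f} ms" for k, v in result.items()})
```

Running the same harness across hardware platforms or OS configurations gives the "different permutations and combinations" comparison the posting mentions; tools like `perf`, `cProfile`, or vendor profilers would supplement this for deeper analysis.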
Posted 3 weeks ago
5.0 years
0 Lacs
india
On-site
Transforming the Future with the Convergence of Simulation and Data
Lead Software Developer in Fluid-Structure Interaction
Do you like a challenge? Are you a complex thinker who likes to solve problems? If so, then you might be the new Altairian we are searching for. At Altair, your curiosity matters. We pride ourselves on a business culture that enables open, creative thinking, and we deeply value our employees and their contributions towards our clients' success, as well as our own.
Job Summary
To reinforce our development team, Altair is looking for a talented developer, a specialist in fluid-structure interaction (FSI), with proven experience developing commercial software. You'll bring your expertise and leadership to accelerate the development of the Multiphysics solution based on the Radioss solver. You master numerical methods such as Arbitrary Lagrangian-Eulerian (ALE) and the Finite Volume Method (FVM), dealing with high-speed loading events and high-energy phenomena such as shocks and compressible flows. Knowledge of discrete methods (SPH, DEM) will also be appreciated. You will have the chance to join a multicultural team of international developers and experts, with real career evolution opportunities.
What You Will Do
- Actively contribute to software development, taking leadership on FSI developments and bringing enhancements and innovations
- Collaborate with all stakeholders to shape the overall code architecture
- Participate in software maintenance and documentation
What You Will Need: Basics
- PhD or Master's degree in computational solid mechanics and/or fluid dynamics
- 5 to 10 years of experience developing a commercial FEA software
- Leadership skills, creating a positive, innovative, and productive collaborative team
- Programming skills: Fortran 90 and C++; Python appreciated
- Fluent in English; French appreciated
Preferred
- Expert in Arbitrary Lagrangian-Eulerian (ALE) methods, including multi-material and thermal, and the Finite Volume Method (FVM) under an explicit time-integration scheme
- Experience with discrete methods (SPH, DEM) and contact mechanics appreciated
- Some knowledge of code parallelization: MPI, OpenMP, GPU
- Program management tools: Git, JIRA, and Agile methods
How You Will Be Successful
- Envision the Future
- Communicate Honestly and Broadly
- Seek Technology and Business "Firsts"
- Embrace Diversity and Take Risks
What We Offer
- Competitive Salary
- Comprehensive Benefit Package
- Outstanding Work/Life Balance
- Flex Time
Why Work With Us
Altair is a global technology company providing software and cloud solutions in the areas of product development, high-performance computing (HPC), and artificial intelligence (AI). Altair enables organizations in nearly every industry to compete more effectively in a connected world, while creating a more sustainable future. With more than 3,000 engineers, scientists, and creative thinkers in 25 countries, we help solve our customers' toughest challenges and deliver unparalleled service, helping the innovators innovate, drive better decisions, and turn today's problems into tomorrow's opportunities. Our vision is to transform customer decision-making with data analytics, simulation, high-performance computing, and artificial intelligence (AI).
For more than 30 years, we have been helping our customers integrate electronics and controls with mechanical design to expand product value, develop AI, simulation, and data-driven digital twins to drive better decisions, and deliver advanced HPC and cloud solutions to support unlimited idea exploration. To learn more, please visit altair.com.
Ready to go? #ONLYFORWARD
At our core we are explorers, adventurers, pioneers. We are the brains behind some of the world's most revolutionary innovations, and we are not only comfortable in new and uncharted waters, we dive in headfirst. We are the original trailblazers that make the impossible possible, discovering new solutions to our customers' toughest challenges.
Altair is an equal opportunity employer. Our backgrounds are diverse, and every member of our global team is critical to our success. Altair's history demonstrates a belief that empowering each individual's authentic voice reinforces a culture that thrives because of the uniqueness among our team.
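As a toy illustration of the explicit time integration named in the preferred qualifications, here is a semi-implicit (symplectic) Euler step for a spring-mass system. This is a pedagogical sketch, not Radioss code, and the constants are made up:

```python
def explicit_step(x, v, k, m, dt):
    """One semi-implicit (symplectic) Euler step for m * x'' = -k * x:
    update the velocity from the current force, then the position."""
    v = v + (-k / m * x) * dt
    x = x + v * dt
    return x, v

# Explicit schemes are only conditionally stable: dt must stay well
# below 2 / omega, where omega = sqrt(k / m) is the natural frequency.
x, v = 1.0, 0.0                       # initial displacement and velocity
for _ in range(4):
    x, v = explicit_step(x, v, k=1.0, m=1.0, dt=0.1)
print(x, v)  # x ≈ 0.901493, v ≈ -0.390060
```

Production explicit solvers apply the same idea element by element, with the stable time step dictated by the smallest element's characteristic time (the CFL-type condition), which is why these codes lean so heavily on MPI/OpenMP/GPU parallelization.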
Posted 3 weeks ago
12.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Company Description
Sandisk understands how people and businesses consume data, and we relentlessly innovate to deliver solutions that enable today's needs and tomorrow's next big ideas. With a rich history of groundbreaking innovations in Flash and advanced memory technologies, our solutions have become the beating heart of the digital world we're living in and that we have the power to shape. Sandisk meets people and businesses at the intersection of their aspirations and the moment, enabling them to keep moving and pushing possibility forward. We do this through the balance of our powerhouse manufacturing capabilities and our industry-leading portfolio of products that are recognized globally for innovation, performance, and quality. Sandisk has two facilities recognized by the World Economic Forum as part of the Global Lighthouse Network for advanced 4IR innovations. These facilities were also recognized as Sustainability Lighthouses for breakthroughs in efficient operations. With our global reach, we ensure the global supply chain has access to the Flash memory it needs to keep our world moving forward.
Job Description
- Good working experience in Python, C/C++, Shell/Bash, and other scripting languages
- Experience in developing CI/CD using Jenkins, Git, or other SCM tools
- Work experience in Selenium web automation, Django, dashboards, database management, and related web development platforms
- Working experience in integrating Jira and Confluence
- Automation of ASIC development flows and pre-silicon platform development for productivity improvement and quick delivery to FW, ASIC validation, and ASIC verification/DFT customers
- Knowledge of DevOps and work experience interacting with IT teams to develop, integrate, and deploy test automation infrastructure
- Experience in Jenkins job parallelization, managing virtual machines, efficiently utilizing test nodes, and optimizing build resources
- Experience working with and integrating microcontroller/microprocessor/FPGA boards, hardware tools, oscilloscopes, UART/SPI/I2C devices, and USB bridges
- The ideal candidate possesses a strong foundation in digital design, FPGA hardware, and software development, with experience in automation tools and infrastructure management
- Candidates with knowledge of firmware/embedded/VLSI development environments, or experience building automation frameworks for a similar background, will be preferred
- Knowledge of protocol analyzers and measurement equipment is an added plus
- Develop and maintain build and test automation frameworks for pre-silicon and post-silicon development environments as part of ASIC Development Engineering
- Work with ASIC flows in the pre-silicon phase and integrate automation and AI capabilities to improve productivity; integrate tools and monitors
- Provide support for DevOps methodology and tools, such as Jenkins, Git, Bitbucket, etc.
- Work with the development team to build CI/CD pipelines, enable self-service build tools, and create reusable deployment jobs
Qualifications
- BS in Electrical Engineering or Computer Engineering with 12-15 years of experience in test automation framework development
- 12-15 years of experience in test automation framework development and continuous integration/continuous delivery processes, preferably in VLSI/firmware/embedded environments
- Experience working in VLSI/firmware/embedded environments
- 7-10 years of working experience in Python, C/C++, Shell/Bash, and other scripting languages
- Knowledge of ASIC flows and pre-silicon platform development flows
- Knowledge of ASIC tools and pre-silicon platform tools is an added plus
Additional Information
Sandisk thrives on the power and potential of diversity.
As a global company, we believe the most effective way to embrace the diversity of our customers and communities is to mirror it from within. We believe the fusion of various perspectives results in the best outcomes for our employees, our company, our customers, and the world around us. We are committed to an inclusive environment where every individual can thrive through a sense of belonging, respect and contribution. Sandisk is committed to offering opportunities to applicants with disabilities and ensuring all candidates can successfully navigate our careers website and our hiring process. Please contact us at jobs.accommodations@wdc.com to advise us of your accommodation request. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying.
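The Jenkins job-parallelization responsibility above largely comes down to splitting a test suite across executors or test nodes. A hypothetical round-robin sharding helper (the test IDs and node labels are invented for illustration):

```python
def shard_tests(tests, n_shards):
    """Split a test list into n_shards round-robin buckets, e.g. for
    fanning a suite out across Jenkins executors or lab test nodes."""
    shards = [[] for _ in range(n_shards)]
    for i, test in enumerate(tests):
        shards[i % n_shards].append(test)
    return shards

tests = [f"test_case_{i}" for i in range(7)]
for node, shard in enumerate(shard_tests(tests, 3)):
    print(f"node-{node}: {shard}")
```

Round robin balances shard sizes when test durations are similar; when they are not, a common refinement is to sort tests by historical runtime and assign each to the currently lightest shard.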
Posted 3 weeks ago
50.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Client
Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. It provides a variety of services, including consulting, technology, professional, and outsourcing services.
Job Details
Position: Cypress Testing
Experience Required: 8-10 years
Notice: Immediate
Work Location: Pune
Mode of Work: Hybrid
Type of Hiring: Contract to Hire
Technology Skill Set:
• Cypress with JavaScript/TypeScript
• Cucumber for BDD-style test definitions
• GitHub Actions for CI/CD integration
• GraphQL for API-level testing and validation
Key Outcomes:
• Implementation of complex data validation tests aligned with business rules
• Optimization of test execution time through parallelization and efficient test design
Posted 4 weeks ago
3.0 - 5.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
As a Data Engineer specializing in geospatial data, your primary responsibility is to design, build, and maintain data infrastructure and systems that handle geospatial information effectively. You will work closely with cross-functional teams, including data scientists, geospatial analysts, and software engineers, to ensure that geospatial data is collected, processed, stored, and analyzed efficiently and accurately.
About SatSure
SatSure is a deep-tech decision intelligence company that works primarily at the nexus of agriculture, infrastructure, and climate action, creating an impact for the other millions, focusing on the developing world. We want to make insights from earth observation data accessible to all. If you are interested in working in an environment that focuses on impact on society, driven by cutting-edge technology, where you will be free to work on innovative ideas and be creative with no hierarchies, SatSure is the place for you.
Key Responsibilities:
- Data Pipeline Development: Design and implement robust data pipelines to acquire, ingest, clean, transform, and process geospatial data from various sources such as satellites, aerial imagery, drones, and geolocation services.
- Data Ingestion, Storage, and Extraction: Develop data models and schemas tailored to geospatial data structures, ensuring optimal performance and scalability for storage and retrieval operations.
- Spatial Database Management: Manage geospatial databases, including both traditional relational databases (e.g., PostgreSQL with the PostGIS extension) and NoSQL databases (e.g., MongoDB, Cassandra), to store and query spatial data efficiently.
- Geospatial Analysis Tools Integration: Integrate geospatial analysis tools and libraries (e.g., GDAL, GeoPandas, Fiona) into data processing pipelines and analytics workflows to perform spatial data analysis, visualization, and geoprocessing tasks.
- Geospatial Data Visualization (frontend-related): Collaborate with data visualization specialists to create interactive maps, dashboards, and visualizations that effectively communicate geospatial insights and patterns to stakeholders.
- Performance Optimization: Identify and address performance bottlenecks in data processing and storage systems, leveraging techniques such as indexing, partitioning, and parallelization to optimize geospatial data workflows.
- Data Quality Assurance: Implement data quality checks and validation procedures to ensure the accuracy, completeness, and consistency of geospatial data throughout the data lifecycle.
- Geospatial Data Governance: Establish data governance policies and standards specific to geospatial data, including metadata management, data privacy, and compliance with geospatial regulations and standards (e.g., INSPIRE, OGC).
- Collaboration and Communication: Collaborate with cross-functional teams to understand geospatial data requirements and provide technical expertise and support. Communicate findings, insights, and technical solutions effectively to both technical and non-technical stakeholders.
Requirements:
Must-have:
- Bachelor's or Master's degree in Computer Science or a related field.
- 3-5 years of experience working in the field and deploying pipelines in production.
- Strong programming skills in languages such as Python, Java, or Scala, with experience in geospatial libraries and frameworks (e.g., Rasterio, Shapely).
- Experience with distributed computing frameworks (e.g., Apache Spark, Airflow) and cloud-based data platforms (e.g., AWS, Azure, Google Cloud Platform).
- Familiarity with geospatial data formats and standards (e.g., GeoJSON, Shapefile, KML) and geospatial data visualization tools (e.g., Mapbox, Leaflet, Tableau).
- Strong analytical and problem-solving skills, with the ability to work with large and complex geospatial datasets.
Good-to-have:
- Proficiency in SQL and experience with geospatial extensions for relational databases (e.g., PostGIS).
- Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Experience with geospatial libraries such as Rasterio, Xarray, GeoPandas, and GDAL.
- Knowledge of distributed computing frameworks such as Dask.
- Familiarity with STAC, GeoParquet, and cloud-native tools.
- Experience productionising data science code.
The role of a Data Engineer for geospatial data is crucial in enabling organizations to leverage the power of geospatial information for various applications, including urban planning, environmental monitoring, transportation, agriculture, and emergency response.
Benefits:
- Medical health cover for you and your family, including unlimited online doctor consultations
- Access to mental health experts for you and your family
- Dedicated allowances for learning and skill development
- Comprehensive leave policy with casual leaves, paid leaves, marriage leaves, and bereavement leaves
- Twice-a-year appraisals
Interview Process:
- Intro call
- Assessment
- Presentation
- Interview rounds (ideally up to 3-4 rounds)
- Culture round / HR round
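The data quality checks mentioned in the responsibilities can be as simple as range-validating coordinates before ingestion. A hypothetical, dependency-free sketch (the record shape and IDs are invented):

```python
def validate_coordinates(records):
    """Return the IDs of records whose lon/lat fall outside valid
    WGS84 ranges -- a lightweight geospatial QA gate before ingestion."""
    bad = []
    for rec in records:
        lon, lat = rec["lon"], rec["lat"]
        if not (-180.0 <= lon <= 180.0 and -90.0 <= lat <= 90.0):
            bad.append(rec["id"])
    return bad

records = [
    {"id": "a", "lon": 77.59, "lat": 12.97},
    {"id": "b", "lon": 200.0, "lat": 12.97},   # longitude out of range
    {"id": "c", "lon": 77.59, "lat": -95.0},   # latitude out of range
]
print(validate_coordinates(records))  # ['b', 'c']
```

Real pipelines would extend this with geometry-validity checks (e.g., Shapely's `is_valid`), CRS verification, and completeness counts, and would quarantine rather than silently drop failing records.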
Posted 1 month ago