Job Overview: We are seeking a talented R Analytics Support specialist to join our analytics team. The ideal candidate will have a strong background in data analysis and statistical modeling and be proficient in the R programming language. You will be responsible for analyzing complex datasets, providing insights, and developing statistical models to support business decisions.

Key Responsibilities:
Utilize R to analyze large, complex datasets, performing data cleaning, transformation, and analysis.
Develop and implement statistical models (regression, time series, classification, etc.) that provide actionable insights (an illustrative sketch follows this posting).
Conduct exploratory data analysis (EDA) to identify trends, patterns, and anomalies.
Visualize data through plots, charts, and dashboards to communicate results to stakeholders effectively.
Collaborate with cross-functional teams to define business problems and develop analytical solutions.
Build and maintain R scripts and automation workflows for repetitive tasks and analyses.
Stay updated with the latest developments in R packages and data science techniques.
Present findings and insights to stakeholders through clear, concise reports and presentations.
Provide technical support and guidance to data analysts and scientists on R-related issues; troubleshoot and resolve R code errors and performance issues; develop and maintain R packages and scripts to support data analysis and reporting; and collaborate with data analysts and scientists to design and implement data visualizations and reports.

Qualifications:
Bachelor's/Master's degree in Statistics, Mathematics, Data Science, Computer Science, or a related field.
At least 3-5 years of recent experience in a senior role specifically focusing on the R language, RStudio, and SQL.
Strong knowledge of statistical techniques (regression, clustering, hypothesis testing, etc.).
Experience with data visualization tools like ggplot2, shiny, or plotly.
Familiarity with SQL and database management systems.
Knowledge of machine learning algorithms and their implementation in R.
Ability to interpret complex data and communicate insights clearly to non-technical stakeholders.
Strong problem-solving skills and attention to detail.
Familiarity with version control tools like Git is a plus.
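Purely as an illustration of the EDA-and-modeling loop this posting describes (the role itself is R-centred), here is a minimal, hypothetical Python sketch using pandas and statsmodels; the file and column names are invented.

# Illustrative only: a minimal EDA-plus-regression loop of the kind the
# posting describes. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sales.csv")            # hypothetical dataset

# Basic cleaning: drop rows missing the fields the model needs.
df = df.dropna(subset=["revenue", "ad_spend", "region"])

# Quick EDA: distribution summaries and a per-region comparison.
print(df[["revenue", "ad_spend"]].describe())
print(df.groupby("region")["revenue"].mean())

# A simple linear model: revenue explained by ad spend and region.
model = smf.ols("revenue ~ ad_spend + C(region)", data=df).fit()
print(model.summary())                   # coefficients, R-squared, p-values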
Job Title: Sr. Product AI Engineer - Front End Software Developer
Experience: 4–8 years
Location: Pune
Type: Full-time

About the Role
We're looking for a highly motivated Sr. Product AI Engineer - Frontend Developer with proficiency in building AI-based desktop apps using TypeScript and frameworks like Electron, Node.js, or Tauri. You will lead the development of scalable and secure user interfaces, work on local API integrations, and optimize performance for cross-platform environments.

Key Responsibilities
• Develop user-friendly and efficient desktop UIs for Windows and macOS.
• Implement and consume local/offline APIs using REST/WebSocket protocols (see the sketch after this posting).
• Integrate AI model workflows into the UI (offline/local deployment).
• Ensure security compliance in application design and data handling.
• Package and deploy desktop apps using cross-platform build tools.
• Optimize app performance for speed and responsiveness.
• Collaborate closely with backend, ML, and DevOps teams.
• Be open to working flexible or extended hours during high-priority phases.

Required Skills
• TypeScript – expert in scalable UI/application logic.
• Electron or Tauri – hands-on experience with desktop app frameworks.
• Node.js – understanding of backend service integration.
• REST/WebSocket – ability to build and consume APIs for local data exchange.
• Secure Coding – knowledge of privacy-first and secure app design.
• Linux – comfortable with Linux-based dev and deployment environments.

Nice-to-Have Skills
• Familiarity with AI/ML model APIs (local or hosted).
• Knowledge of Redis or SQLite for lightweight data storage.
• Experience in plugin/module system architecture.
• Skills in cross-platform build automation (e.g., electron-builder, pkg).
• Experience working in air-gapped or security-restricted environments.

Ideal Candidate Traits
• Curious and proactive — thrives in fast-moving, collaborative teams.
• Strong sense of ownership and accountability.
• Demonstrates a growth mindset and embraces continuous learning.
• Clear communicator, especially in cross-functional settings.
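This is a TypeScript role, so the following is only a language-neutral illustration of the local REST/WebSocket data exchange mentioned above: a minimal Python sketch of a localhost WebSocket echo service such a desktop UI might consume. The port and behaviour are hypothetical, and it assumes a recent version of the third-party websockets package.

# Illustrative sketch: a local-only WebSocket service standing in for the
# kind of offline API a desktop UI would consume. Port is hypothetical.
import asyncio
import websockets  # third-party: pip install websockets

async def handler(ws):
    # Echo every message back, tagged, to stand in for a local API response.
    async for message in ws:
        await ws.send(f"local-api: {message}")

async def main():
    # Bind to localhost only, as an offline/local API would.
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())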
Job Title: Data Modeler / Data Analyst
Experience: 6–8 Years
Location: Pune

Job Summary
We are looking for a seasoned Data Modeler / Data Analyst to design and implement scalable, reusable logical and physical data models on Google Cloud Platform—primarily BigQuery. You will partner closely with data engineers, analytics teams, and business stakeholders to translate complex business requirements into performant data models that power reporting, self-service analytics, and advanced data science workloads.

Key Responsibilities
· Gather and analyze business requirements and translate them into conceptual, logical, and physical data models on GCP (BigQuery, Cloud SQL, Cloud Spanner, etc.).
· Design star/snowflake schemas, data vaults, and other modeling patterns that balance performance, flexibility, and cost.
· Implement partitioning, clustering, and materialized views in BigQuery to optimize query performance and cost efficiency (see the sketch after this posting).
· Establish and maintain data modeling standards, naming conventions, and metadata documentation to ensure consistency across analytic and reporting layers.
· Collaborate with data engineers to define ETL/ELT pipelines and ensure data models align with ingestion and transformation strategies (Dataflow, Cloud Composer, Dataproc, dbt).
· Validate data quality and lineage; work with BI developers and analysts to troubleshoot performance issues or data anomalies.
· Conduct impact assessments for schema changes and guide version-control processes for data models.
· Mentor junior analysts/engineers on data modeling best practices and participate in code/design reviews.
· Contribute to capacity planning and cost-optimization recommendations for BigQuery datasets and reservations.

Must-Have Skills
· 6–8 years of hands-on experience in data modeling, data warehousing, or database design, including at least 2 years on GCP BigQuery.
· Proficiency in dimensional modeling, 3NF, and modern patterns such as data vault.
· Expert SQL skills with demonstrable ability to optimize complex analytical queries on BigQuery (partitioning, clustering, sharding strategies).
· Strong understanding of ETL/ELT concepts and experience working with tools such as Dataflow, Cloud Composer, or dbt.
· Familiarity with BI/reporting tools (Looker, Tableau, Power BI, or similar) and how model design impacts dashboard performance.
· Experience with data governance practices—data cataloging, lineage, and metadata management (e.g., Data Catalog).
· Excellent communication skills to translate technical concepts into business-friendly language and collaborate across functions.

Good to Have
· Experience working on Azure Cloud (Fabric, Synapse, Delta Lake).

Education
· Bachelor's or master's degree in Computer Science, Information Systems, Engineering, Statistics, or a related field. Equivalent experience will be considered.
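As a minimal illustration of the BigQuery partitioning, clustering, and materialized-view work described above (not any actual schema from this role), here is a hedged Python sketch using the official google-cloud-bigquery client. The project, dataset, and column names are invented, and default application credentials are assumed.

# Illustrative sketch: a date-partitioned, clustered table plus a
# materialized view, created via a BigQuery multi-statement script.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

ddl = """
CREATE TABLE IF NOT EXISTS analytics.orders (
  order_id STRING,
  customer_id STRING,
  order_date DATE,
  amount NUMERIC
)
PARTITION BY order_date          -- prune scans (and cost) by date
CLUSTER BY customer_id;          -- co-locate rows for common filters

CREATE MATERIALIZED VIEW IF NOT EXISTS analytics.daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM analytics.orders
GROUP BY order_date;
"""

client.query(ddl).result()       # run the script; raises on DDL errors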
Job Title: Senior Technical Delivery Manager – ETL, Data Warehouse and Analytics
Experience: 15+ years in IT delivery management, with at least 7 years in Big Data, Cloud, and Analytics. Experience should span ETL, Data Management, Data Visualization, and Project Management.
Location: Mumbai, India
Department: Big Data and Cloud – Data Analytics Delivery

Company: Smartavya Analytica Private Limited is a niche Data and AI company. Based in Pune, we are pioneers in data-driven innovation, transforming enterprise data into strategic insights. Established in 2017, our team has experience handling large datasets of up to 20 PB in a single implementation and has delivered many successful data and AI projects across major industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are leaders in Big Data, Cloud and Analytics projects, with super-specialisation in very large data platforms. https://smart-analytica.com

Empowering Your Digital Transformation with Data Modernization and AI

Job Overview:
Smartavya Analytica Private Limited is seeking an experienced Senior Delivery Manager to oversee and drive the successful delivery of large-scale Big Data, Cloud, and Analytics projects. The ideal candidate will have a strong background in IT delivery management, excellent leadership skills, and a proven record of managing complex projects from initiation to completion, with the right blend of Client Engagement, Project Delivery, and Data Management skills.

Key Responsibilities:
• Technical Project Management:
o Lead the end-to-end technical delivery of multiple projects in Big Data, Cloud, and Analytics; lead teams in technical solutioning, design, and development.
o Develop detailed project plans, timelines, and budgets, ensuring alignment with client expectations and business goals.
o Monitor project progress, manage risks, and implement corrective actions as needed to ensure timely, quality delivery.
• Client Engagement and Stakeholder Management:
o Build and maintain strong client relationships, acting as the primary point of contact for project delivery.
o Understand client requirements, anticipate challenges, and provide proactive solutions.
o Coordinate with internal and external stakeholders to ensure seamless project execution.
o Communicate project status, risks, and issues to senior management and stakeholders in a clear and timely manner.
• Team Leadership:
o Lead and mentor a team of data engineers, analysts, and project managers.
o Ensure effective resource allocation and utilization across projects.
o Foster a culture of collaboration, continuous improvement, and innovation within the team.
• Technical and Delivery Excellence:
o Leverage data management expertise and experience to guide and lead technical conversations effectively; identify the technical support the team needs and work to resolve blockers, either through own expertise or by networking with internal and external stakeholders.
o Implement best practices in project management, delivery, and quality assurance.
o Drive continuous improvement initiatives to enhance delivery efficiency and client satisfaction.
o Stay updated on the latest trends and advancements in Big Data, Cloud, and Analytics technologies.

Requirements:
• Experience in IT delivery management, particularly in Big Data, Cloud, and Analytics.
• Strong knowledge of project management methodologies and tools (e.g., Agile, Scrum, PMP).
• Excellent leadership, communication, and stakeholder management skills.
• Proven ability to manage large, complex projects with multiple stakeholders.
• Strong critical-thinking skills and the ability to make decisions under pressure.

Academic Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Relevant certifications in Big Data, Cloud platforms (GCP, Azure, AWS, Snowflake, Databricks), Project Management, or similar areas are preferred.

Experience:
• 15+ years in IT delivery management, with at least 7 years in Big Data, Cloud, and Analytics, spanning ETL, Data Management, Data Visualization, and Project Management.

If you have a passion for leading high-impact projects and delivering exceptional results, we encourage you to apply and be part of our innovative team at Smartavya Analytica Private Limited.
Job Title: Cloud DevOps Architect
Location: Pune, India
Experience: 10-15 Years
Work Mode: Full-time, Office-based
Company: Smartavya Analytica Private Limited

Company Overview:
Smartavya Analytica is a niche Data and AI company based in Mumbai, established in 2017. We specialize in data-driven innovation, transforming enterprise data into strategic insights. With expertise spanning 25+ Data Modernization projects and experience handling large datasets of up to 24 PB in a single implementation, we have successfully delivered data and AI projects across multiple industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are specialists in Cloud, Hadoop, Big Data, AI, and Analytics, with a strong focus on Data Modernization for On-Premises, Private, and Public Cloud Platforms. Visit us at: https://smart-analytica.com

Job Summary:
We are looking for an accomplished Cloud DevOps Architect to design and implement robust DevOps and Infrastructure Automation frameworks across Azure, GCP, or AWS environments. The ideal candidate will have a deep understanding of CI/CD, IaC, VPC Networking, Security, and Automation using Terraform or Ansible.

Key Responsibilities:
Architect and build end-to-end DevOps pipelines using native cloud services (Azure DevOps, AWS CodePipeline, GCP Cloud Build) and third-party tools (Jenkins, GitLab, etc.).
Define and implement foundation setup architecture (Azure, GCP, and AWS) per the recommended best practices.
Design and deploy secure VPC architectures; manage networking, security groups, load balancers, and VPN gateways (see the sketch after this posting).
Implement Infrastructure as Code (IaC) using Terraform or Ansible for scalable and repeatable deployments.
Establish CI/CD frameworks integrating with Git, containers, and orchestration tools (e.g., Kubernetes, ECS, AKS, GKE).
Define and enforce cloud security best practices, including IAM, encryption, secrets management, and compliance standards.
Collaborate with application, data, and security teams to optimize infrastructure, release cycles, and system performance.
Drive continuous improvement in automation, observability, and incident response practices.

Must-Have Skills:
10-15 years of experience in DevOps, Infrastructure, or Cloud Architecture roles.
Deep hands-on expertise in Azure, GCP, or AWS cloud platforms (any one is mandatory; more is a bonus).
Strong knowledge of VPC architecture, Cloud Security, IAM, and Networking principles.
Expertise in Terraform or Ansible for Infrastructure as Code.
Experience building resilient CI/CD pipelines and automating application deployments.
Strong troubleshooting skills across networking, compute, storage, and containers.

Preferred Certifications:
Azure DevOps Engineer Expert / AWS Certified DevOps Engineer Professional / Google Professional DevOps Engineer
HashiCorp Certified: Terraform Associate (preferred for Terraform users)
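The posting asks for Terraform or Ansible; the following is only a language-neutral sketch of the secure-VPC layout it describes, written against AWS's boto3 SDK. Region, CIDR ranges, and names are hypothetical, and configured AWS credentials are assumed.

# Illustrative only: carve out a VPC with one public and one private subnet
# and a deliberately tight default security group. All values are invented.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Public subnet: would later get an internet-gateway route.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
# Private subnet: no direct internet route; workloads egress via NAT.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")

# Internet gateway for the public side.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Security group that only admits HTTPS from inside the VPC itself.
sg_id = ec2.create_security_group(GroupName="internal-https",
                                  Description="HTTPS from VPC only",
                                  VpcId=vpc_id)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "10.0.0.0/16"}]}],
)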
Job Title: R Analytics Lead
Experience: 8-10 years in Analytics (SAS/SPSS/R/Python), with at least the last 4 years in a senior position focusing on RStudio, R Server, and similar tooling
Location: Mumbai, India [Full-Time Office Hours]
Department: Business Analytics

Company: Smartavya Analytica Private Limited is a niche Data and AI company. Based in Pune, we are pioneers in data-driven innovation, transforming enterprise data into strategic insights. Established in 2017, our team has experience handling large datasets of up to 20 PB in a single implementation and has delivered many successful data and AI projects across major industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are leaders in Big Data, Cloud and Analytics projects, with super-specialisation in very large data platforms. https://smart-analytica.com

Empowering Your Digital Transformation with Data Modernization and AI

Job Summary:
We are seeking a highly skilled R Analytics Lead to oversee our analytics team and drive data-driven decision-making processes. The ideal candidate will have extensive experience in R programming, data analysis, and statistical modelling, and will be responsible for leading analytics projects that provide actionable insights in support of business objectives.

Responsibilities and Duties:
Lead the development and implementation of advanced analytics solutions using R.
Manage and mentor a team of data analysts and data scientists.
Collaborate with cross-functional teams to identify business needs and translate them into analytical solutions.
Design and execute complex data analyses, including predictive modelling, machine learning, and statistical analysis.
Develop and maintain data pipelines and ETL processes.
Ensure the accuracy and integrity of data and analytical results.

Academic Qualifications:
Bachelor's or Master's degree in Statistics, Computer Science, Data Science, or a related field.

Skills:
Extensive experience with R programming and related libraries (e.g., ggplot2, dplyr, caret).
Strong background in statistical modelling, machine learning, and data visualization.
Proven experience in leading and managing analytics teams.
Excellent problem-solving skills and attention to detail.
Strong communication skills, with the ability to present complex data insights to non-technical audiences.
Experience with other analytics tools and programming languages (e.g., Python, SQL) is a plus.
Job Title: Product Engineering Manager – AI-Powered Developer Tools
Location: Pune, Kharadi

Role Snapshot:
You'll be the "chief builder" who turns a set of working feature prototypes into a cohesive, shippable product. Beyond guiding architecture, you will own the full engineering, testing, build, and release lifecycle—from branch strategy and CI/CD pipelines to demo-ready builds. Expect to roll up your sleeves, dive into code or Dockerfiles when needed, and keep a close pulse on every sprint deliverable until the demo goes live.

Key Responsibilities:
• Product-engineering leadership – translate the AI-driven IDE vision into actionable roadmaps, sprint goals, and acceptance criteria; focus and prioritise what ships next.
• Own the release pipeline end-to-end – branch strategy, CI/CD, automated testing, versioned builds, and demo-day readiness.
• Hands-on development – pair-program, review PRs, and jump into critical-path code (TypeScript/Node, Go/Rust, or Python) whenever the team is blocked.
• Coach & unblock the team – micro-manage 6-10 engineers until velocity feels effortless; instil a clear "definition of done" and code-quality gates.
• Embed AI in the workflow – push AI coding tools/IDEs to their limits; wire LLM services into new features.
• Drive a "ship-fast, iterate-faster" culture – short sprints, clean merges, nightly builds, zero demo-day surprises.
• Sweat the details – performance, polish, and great UX in every release.

Must-Have Experience:
• 8–12 years of engineering experience, with 3+ years leading product teams for SaaS, IDE plugins, or dev-tool startups.
• Expert in Git flow and conflict resolution.
• Hands-on with modern CI/CD (GitHub Actions / GitLab CI / Azure Pipelines) for cross-platform desktop or VS Code extension builds.
• End-to-end test strategy ownership (unit → integration → Playwright/Cypress).
• Strong coding skills in TS/JS plus one systems language (Go, Rust, or C++).
• Comfortable running local or hosted LLMs and wiring them into product features.

Ideal Candidate Traits:
• Cracked builder pre-AI who now wields AI tools without relying on them.
• Proven record shipping products users love, not just PoCs.
• Momentum over perfection – continuous small wins trump "big-bang" launches.
• Energised by ownership, allergic to bureaucracy – rigorous on "definition of done," flexible on process.
• Cares deeply about craft and teammates.

Why Smart Analytica:
• Green-field canvas: define the DNA of a next-gen AI coding platform.
• Hands-on freedom: founders expect you to lead and code.
• Tight-knit team: builders who care about craft, speed, and each other.

Ready to own the build button? Email your résumé to hiring@smart-analytica.com with subject "Product Engineering Manager".
Job Title: Product Engineer – AI-Powered Developer Tools (On-Prem)
Location: Tower 1, World Trade Center, Kharadi, Pune
Work Arrangement: Full-time, In-Office

Role Snapshot
Join our core engineering squad building air-gapped, AI-powered developer tooling that enterprises can run entirely behind their firewalls. You'll turn ideas and prototypes into polished features, work shoulder-to-shoulder with architects, and ship code that delights developers—the product is built mainly to work within customer firewalls, with customised LLM models and on-prem systems.

Key Responsibilities
• Feature Ownership (End-to-End) – design, code, test, and merge user-facing features in our AI-driven IDE extensions and backend services.
• Self-Hosted AI Wiring – integrate local LLMs, embeddings, and static-analysis models (e.g., Llama-3, Mistral) for completion, refactoring, search, and chat—no external endpoints (see the sketch after this posting).
• CI/CD & DevOps Contribution – keep pipelines green, Dockerise new services, update Helm/Ansible scripts, and ensure on-prem deploys are one-click.
• Quality First – write unit, integration, and Playwright/Cypress tests; monitor performance budgets and chase latency until it feels instant.
• Collaborative Engineering – pair-program, review PRs, and share knowledge to unblock teammates; work closely with an 8-10-person dev pod and the Product Architect.

Must-Have Experience
• 2–5 years building production-grade software (SaaS tools, IDE plugins, or backend services).
• Solid in Java, APIs, microservices, and TypeScript/JavaScript, plus Python or Go (Rust/C++ welcome).
• Comfortable with Git workflows, Docker, and at least one on-prem K8s/OpenShift environment.
• Exposure to LLM frameworks (e.g., Ollama, HF Transformers) or traditional ML pipelines.
• Familiar with IaC (Helm/Ansible/Terraform) and basic Linux ops.

You'll be a good fit if you:
• Were a cracked builder before the AI wave.
• Have AI fully embedded in your workflow but didn't need it to be great.
• Know how to use tools like Cursor, Claude Code, Google CLI, Windsurf, Replit, and Lovable — and push them to the limit.
• Can build entirely functional prototypes within 1-2 hours using AI tools.
• Have built products people use (and love).
• Care more about who you work with than what you work on.
• Ship fast, iterate faster, and value momentum over perfection.
• See coding as craft — performance, polish, and UX matter to you.
• Are allergic to bureaucracy and energized by ownership.

Why Smart Analytica?
• Green-field canvas: help build the next must-have, on-prem AI coding platform.
• Hands-on mentorship: work directly with seasoned architects & AI experts.
• Tight-knit team: builders who care about craft, speed, and each other.

Ready to code the future—behind the firewall? Email your résumé to hiring@smart-analytica.com with subject "Product Engineer".
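Purely illustrative of the self-hosted LLM wiring mentioned above: a minimal Python sketch calling a local Ollama server's REST API. The model tag and prompt are hypothetical; it assumes Ollama is running on its default port and that the requests package is installed.

# Illustrative sketch: ask a locally hosted model for a code completion.
# Assumes an Ollama server on its default port; no external endpoints.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's local REST endpoint
    json={
        "model": "llama3",                   # hypothetical local model tag
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,                     # one JSON body instead of chunks
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])               # the model's completion text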
Job Title: Product Architect – AI-Powered Developer Tools (On-Prem)
Location: Tower 1, World Trade Center, Kharadi, Pune
Work Arrangement: Full-time, In-Office

Role Snapshot
Be the architect who turns cutting-edge AI research into air-gapped, enterprise-grade developer tooling. You will research emerging AI-IDE innovations, spin up quick-and-dirty prototypes, and drive an 8-10-person team to ship those features at break-neck speed—the product is built mainly to work within customer firewalls, with customised LLM models and on-prem systems.

Key Responsibilities
• On-Prem Architectural Ownership – design a fully self-hosted, distributed architecture that runs on VMs, bare metal, or private Kubernetes/OpenShift clusters—data plane to UI.
• Rapid Innovation Research – scout open-source / offline-capable AI techniques, prototype them in Python/TypeScript/Go, and distil viable features for the roadmap.
• Self-Hosted AI/ML Integration – deploy and fine-tune local LLMs (Llama-3, Mistral, Ollama, GPT-J, etc.), embeddings, and code-intel models with sub-second latency—no external calls.
• Secure Build & Packaging – craft hardened build pipelines, sign binaries, embed anti-reverse-engineering guards, and own SBOM & vulnerability closure.
• DevOps & CI/CD Leadership (Behind the Firewall) – standardise Docker images, private registries, and IaC (Ansible/Terraform/Helm) for on-prem clusters; enforce shift-left security.
• Technical Leadership – run design reviews, create ADRs, mentor and unblock an 8-10-person dev team, keeping docs & diagrams one sprint ahead.
• Performance & Reliability – set SLOs, choose the right caches/queues, and tune until users feel performance is instant.

Ideal Candidate Traits
• 8–12 years building large, scalable on-prem or hybrid dev-tool products; 3+ years as lead/principal architect.
• Fluent in at least two of Python, TypeScript/Node, Go, Rust, and Java, plus AI/ML frameworks.
• Proven track record of shipping self-hosted LLM/ML solutions without external API dependencies.
• Deep expertise in distributed systems, secure SDLC, Docker/K8s/OpenShift, and private IaC tooling.
• Thrives on turning research spikes into production-ready features—fast.

You'll be a good fit if you:
• Were a cracked builder before the AI wave.
• Have AI fully embedded in your workflow but didn't need it to be great.
• Know how to use tools like Cursor, Claude Code, Google CLI, Windsurf, Replit, and Lovable — and push them to the limit.
• Can build entirely functional prototypes within 1-2 hours using AI tools.
• Have built products people use (and love).
• Care more about who you work with than what you work on.
• Ship fast, iterate faster, and value momentum over perfection.
• See coding as craft — performance, polish, and UX matter to you.
• Are allergic to bureaucracy and energized by ownership.

Why Smart Analytica?
• Green-field canvas: define the DNA of a next-gen, on-prem AI coding platform.
• Hands-on freedom: founders expect you to lead and code.
• Tight-knit team: builders who care about craft, speed, and each other.

Ready to sketch the blueprint and ship it on-prem? Email your résumé to hiring@smart-analytica.com with subject "Product Architect".
The Product Engineer – AI-Powered Developer Tools (On-Prem) position, based in Tower 1, World Trade Center, Kharadi, Pune, offers a full-time, in-office work arrangement. As part of the core engineering squad, you will be involved in building air-gapped, AI-powered developer tooling designed to operate completely behind customer firewalls. Your role will entail transforming ideas and prototypes into refined features, collaborating closely with architects, and delivering code that enhances the developer experience. The product is specifically tailored to function within customer firewalls, incorporating customized LLM models and on-prem systems.

Your key responsibilities will include owning features from end to end, encompassing the design, coding, testing, and integration of user-facing features in AI-driven IDE extensions and backend services. You will also be responsible for self-hosted AI wiring, integrating local LLMs, embeddings, and static-analysis models for various functionalities without relying on external endpoints. Additionally, you will contribute to CI/CD and DevOps activities to maintain efficient pipelines, dockerize new services, update Helm/Ansible scripts, and ensure seamless on-prem deployments. Emphasis will also be placed on quality: writing various types of tests, monitoring performance metrics, and optimizing latency for a seamless user experience.

To excel in this role, you should have a minimum of 5 years of experience building production-grade software, with expertise in Java, APIs, microservices, and TypeScript/JavaScript, plus proficiency in Python or Go (knowledge of Rust/C++ is a plus). Familiarity with Git workflows, Docker, and at least one on-prem K8s/OpenShift environment is essential. Exposure to LLM frameworks or traditional ML pipelines, as well as experience with IaC tools and basic Linux operations, will also be beneficial.

If you were a proficient builder before the AI wave, can effectively incorporate AI into your workflow, and are adept at utilizing tools like Cursor, Claude Code, Google CLI, Windsurf, Replit, and Lovable, you are likely a good fit for this role. You should be capable of rapidly developing functional prototypes using AI tools, have a track record of creating products that users love, and prioritize collaboration and momentum over perfection.

Smart Analytica offers a stimulating work environment where you can contribute to building the next-generation on-prem AI coding platform. You will have the opportunity for hands-on mentorship from seasoned architects and AI experts in a close-knit team that values craftsmanship, efficiency, and mutual support. If you are ready to play a pivotal role in shaping the future of AI development behind the firewall, please email your resume to hiring@smart-analytica.com with the subject "Product Engineer".
Role: R Analytics Support
Mode: Onsite (5 days), rotational shifts (24x7); no night shift
Location: Mumbai

Job Summary
We are looking for an experienced R Analytics Lead to manage the day-to-day operations of our R-based analytics environment. The role focuses on monitoring the execution of existing R scripts, resolving failures through root cause analysis and data cleaning, supporting business teams with production data requests, and mentoring the R Ops team for better process efficiency and incident handling.

Key Responsibilities:
Monitor Production Jobs: Oversee successful execution of scheduled R scripts; monitor failures, investigate issues, and take corrective actions (see the sketch after this posting).
Root Cause Analysis: Troubleshoot script failures and identify data or logic issues; perform necessary fixes and re-execute the process to ensure output delivery.
Data Cleaning: Handle raw or inconsistent production data by applying proper cleaning techniques to ensure smooth script execution.
Production Data Requests: Fulfill various production data and reporting requests raised by business stakeholders using R and SQL.
Issue Resolution & Team Support: Act as the go-to person for any technical issues in the R Ops team; guide and support team members in identifying and resolving problems.
Process Improvement: Identify areas to improve existing R code performance, suggest enhancements, and help automate or simplify routine tasks.
Collaboration with Development & QA: Support testing, deployment, and monitoring activities for new script developments or changes in the production environment.
Knowledge Sharing: Train and mentor team members on R coding standards, production support practices, database usage, and debugging techniques.

Required Qualifications
6+ years of experience in analytics, with at least 4 years in a lead or senior operations/support role.
Strong hands-on experience in R programming (especially with packages like dplyr, data.table, readr, lubridate).
Proficiency in SQL for data extraction, transformation, and analysis.
Experience in handling production support, script monitoring, and issue resolution.
Demonstrated ability to lead teams, train junior members, and coordinate across departments.

Desirable Skills
Familiarity with scheduling tools and database connections in a production environment.
Ability to document processes, communicate issues clearly, and interact with business users.
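As a hedged illustration of the production-job monitoring loop described above (script path, log file, and retry policy are all invented), here is a minimal Python wrapper that runs a scheduled R script via Rscript, logs failures for root cause analysis, and retries once:

# Illustrative sketch: run a scheduled R script, capture failures, retry once.
# Script path and log destination are hypothetical; assumes Rscript is on PATH.
import logging
import subprocess

logging.basicConfig(filename="r_jobs.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def run_r_job(script: str, retries: int = 1) -> bool:
    """Execute an R script; log stderr on failure and retry a limited number of times."""
    for attempt in range(retries + 1):
        result = subprocess.run(["Rscript", script],
                                capture_output=True, text=True)
        if result.returncode == 0:
            logging.info("%s succeeded (attempt %d)", script, attempt + 1)
            return True
        # Keep the stderr tail for root cause analysis before re-executing.
        logging.error("%s failed (attempt %d): %s",
                      script, attempt + 1, result.stderr.strip()[-500:])
    return False

if __name__ == "__main__":
    run_r_job("daily_sales_report.R")    # hypothetical production script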
You are an experienced R Analytics Lead responsible for managing the day-to-day operations of our R-based analytics environment. Your main duties will include monitoring the execution of existing R scripts, resolving failures through root cause analysis and data cleaning, supporting business teams with production data requests, and mentoring the R Ops team for improved process efficiency and incident handling.

You will be in charge of overseeing the successful execution of scheduled R scripts, monitoring failures, investigating issues, and taking corrective actions. Troubleshooting script failures, identifying data or logic issues, and performing necessary fixes to ensure smooth output delivery will be a crucial part of your role. Additionally, you will handle raw or inconsistent production data by applying appropriate cleaning techniques for seamless script execution.

In this position, you will fulfill various production data and reporting requests raised by business stakeholders using R and SQL. You will also be the go-to person for technical issues within the R Ops team, guiding and supporting team members in issue resolution. Identifying areas for improvement in existing R code performance, suggesting enhancements, and helping automate or simplify routine tasks will be key responsibilities.

Collaboration with Development and QA teams to support testing, deployment, and monitoring activities for new script developments or changes in the production environment is essential. Furthermore, you will be responsible for training and mentoring team members on R coding standards, production support practices, database usage, and debugging techniques.

The ideal candidate for this role should have at least 6 years of experience in analytics, with a minimum of 4 years in a lead or senior operations/support position. Strong hands-on experience in R programming, proficiency in SQL, and a background in handling production support, script monitoring, and issue resolution are required. Demonstrated leadership abilities, including team management, training, and coordination across departments, are also essential.

Desirable skills for this role include familiarity with scheduling tools and database connections in a production environment, as well as the ability to document processes, communicate issues clearly, and interact effectively with business users.
Role: Hadoop Admin Manager – CDP
Years of Experience: 10-15 years
Location: Mumbai (Kurla)
Shifts: 24x7 (rotational)
Mode: Onsite

Experience: 10+ years of experience in IT, with at least 7 years in cloud and system administration, and at least 5 years of experience with, and a strong understanding of, big data technologies in the Hadoop ecosystem: Hive, HDFS, MapReduce, Flume, Pig, Cloudera, HBase, Sqoop, Spark, etc.

Empowering Your Digital Transformation with Data Modernization and AI

Job Overview
Smartavya Analytica Private Limited is seeking an experienced Hadoop Administrator to manage and support our Hadoop ecosystem. The ideal candidate will have strong expertise in Hadoop cluster administration, excellent troubleshooting skills, and a proven track record of maintaining and optimizing Hadoop environments.

Key Responsibilities
Install, configure, and manage Hadoop clusters, including HDFS, YARN, Hive, HBase, and other ecosystem components.
Monitor and manage Hadoop cluster performance, capacity, and security.
Perform routine maintenance tasks such as upgrades, patching, and backups.
Implement and maintain data ingestion processes using tools like Sqoop, Flume, and Kafka.
Ensure high availability and disaster recovery of Hadoop clusters.
Collaborate with development teams to understand requirements and provide appropriate Hadoop solutions.
Troubleshoot and resolve issues related to the Hadoop ecosystem.
Maintain documentation of Hadoop environment configurations, processes, and procedures.

Requirements
Experience installing, configuring, and tuning Hadoop distributions, with hands-on experience in Cloudera.
Understanding of Hadoop design principles and the factors that affect distributed system performance, including hardware and network considerations.
Provide infrastructure recommendations, capacity planning, and workload management.
Develop utilities to better monitor the cluster (Ganglia, Nagios, etc.); a sketch of this kind of health check follows this posting.
Manage large clusters with huge volumes of data; perform cluster maintenance tasks, including adding and removing nodes, cluster monitoring, and troubleshooting.
Manage and review Hadoop log files.
Install and implement security for Hadoop clusters; install Hadoop updates, patches, and version upgrades, and automate these through scripts.
Act as the point of contact for vendor escalation; work with Hortonworks to resolve issues.
Conceptual/working knowledge of basic data management concepts such as ETL, reference/master data, data quality, and RDBMS.
Working knowledge of a scripting language such as Shell, Python, or Perl.
Experience with orchestration and deployment tools.

Academic Qualification
BE / B.Tech in Computer Science or equivalent, along with hands-on experience dealing with large datasets and distributed computing in data warehousing and business intelligence systems using Hadoop.
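A minimal, hypothetical Python sketch of the kind of cluster-monitoring utility mentioned above: it polls the NameNode's built-in JMX endpoint and flags low remaining capacity or dead DataNodes. The hostname is invented and the port is an assumption (9870 is the Hadoop 3 NameNode web default).

# Illustrative sketch: poll the NameNode JMX endpoint and flag problems.
# Hostname/port are hypothetical; thresholds are invented examples.
import requests

JMX_URL = "http://namenode.example.internal:9870/jmx"

def check_namenode() -> None:
    beans = requests.get(
        JMX_URL,
        params={"qry": "Hadoop:service=NameNode,name=FSNamesystemState"},
        timeout=10,
    ).json()["beans"][0]

    used = beans["CapacityUsed"]
    total = beans["CapacityTotal"]
    dead = beans["NumDeadDataNodes"]

    pct = 100.0 * used / total
    print(f"HDFS used: {pct:.1f}%  dead DataNodes: {dead}")
    if pct > 85 or dead > 0:
        # In production this would page the on-call admin instead of printing.
        print("WARNING: cluster needs attention")

if __name__ == "__main__":
    check_namenode()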
Role: Informatica Lead
Mode: Onsite, 5 days (rotational shifts, 24x7)
Location: Mumbai (Kurla)

Required Skills
6-10 years of experience in ETL development and data integration, with at least 3 years in a lead role.
Proven experience with Informatica PowerCenter, Informatica Cloud Data Integration, and large-scale ETL implementations.
Experience integrating data from various sources such as databases, flat files, and APIs.

Preferred Skills
Strong expertise in Informatica PowerCenter, Informatica Cloud, and ETL frameworks.
Proficiency in SQL, PL/SQL, UNIX shell scripting, and performance optimization techniques.
Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
Familiarity with big data tools such as Hive, Spark, or Snowflake is a plus.
Strong understanding of data modeling concepts and relational database systems.

Responsibilities
ETL Development and Maintenance:
Lead the design, development, and maintenance of ETL workflows and mappings using Informatica PowerCenter and Cloud Data Integration.
Ensure the reliability, scalability, and performance of ETL solutions to meet business requirements.
Optimize ETL processes for data integration, transformation, and loading into data warehouses and other target systems.

Solution Architecture and Implementation:
Collaborate with architects and business stakeholders to define ETL solutions and data integration strategies.
Develop and implement best practices for ETL design and development.
Ensure seamless integration with on-premises and cloud-based data platforms.

Data Governance and Quality:
Establish and enforce data quality standards and validation processes (a generic sketch follows this posting).
Implement data governance and compliance policies to ensure data integrity and security.
Perform root cause analysis and resolve data issues proactively.

Team Leadership:
Manage, mentor, and provide technical guidance to a team of ETL developers.
Delegate tasks effectively and ensure timely delivery of projects and milestones.
Conduct regular code reviews and performance evaluations for team members.

Automation and Optimization:
Develop scripts and frameworks to automate repetitive ETL tasks.
Implement performance tuning for ETL pipelines and database queries.
Explore opportunities to improve efficiency and streamline workflows.

Collaboration and Stakeholder Engagement:
Work closely with business analysts, data scientists, and application developers to understand data requirements and deliver solutions.
Communicate project updates, challenges, and solutions to stakeholders effectively.
Act as the primary point of contact for Informatica-related projects and initiatives.
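Informatica workflows are built in Informatica's own tools, so the following is only a tool-agnostic illustration of the post-load data-quality validation described above: a minimal Python sketch using the standard library's sqlite3, with invented table and rule definitions.

# Illustrative, tool-agnostic sketch of post-load data-quality checks.
# Uses an in-memory sqlite3 database so it runs anywhere; data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, email TEXT, created_at TEXT);
    INSERT INTO customers VALUES
        (1, 'a@example.com', '2024-01-05'),
        (2, NULL,            '2024-01-06'),   -- should fail the null check
        (2, 'b@example.com', '2024-01-07');   -- duplicate key
""")

# Each rule is a query that returns the number of offending rows.
rules = {
    "null_email":    "SELECT COUNT(*) FROM customers WHERE email IS NULL",
    "duplicate_ids": """SELECT COUNT(*) FROM (
                            SELECT id FROM customers
                            GROUP BY id HAVING COUNT(*) > 1)""",
}

for name, sql in rules.items():
    bad = conn.execute(sql).fetchone()[0]
    status = "OK" if bad == 0 else f"FAIL ({bad} rows)"
    print(f"{name}: {status}")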
We are looking for a talented R Analytics professional to join our analytics team. You should have a strong background in data analysis, statistical modeling, and proficiency in the R programming language. Your main responsibilities will include analyzing complex datasets, providing insights, and developing statistical models to support business decisions.

You will be expected to utilize R programming to analyze large and complex datasets and perform data cleaning, transformation, and analysis. Additionally, you will be responsible for developing and implementing statistical models such as regression, time series, and classification to provide actionable insights. Conducting exploratory data analysis (EDA) to identify trends, patterns, and anomalies is also a key part of the role.

Furthermore, you will need to visualize data through plots, charts, and dashboards to effectively communicate results to stakeholders. Collaboration with cross-functional teams to define business problems and develop analytical solutions is essential. Building and maintaining R scripts and automation workflows for repetitive tasks and analysis is also part of the job. You should stay updated with the latest developments in R packages and data science techniques. Presenting findings and insights to stakeholders through clear, concise reports and presentations is crucial.

Providing technical support and guidance to data analysts and scientists on R-related issues, troubleshooting and resolving R code errors and performance issues, developing and maintaining R packages and scripts to support data analysis and reporting, and collaborating with data analysts and scientists to design and implement data visualizations and reports are also part of the responsibilities.

Qualifications:
- Bachelor's/Master's degree in Statistics, Mathematics, Data Science, Computer Science, or a related field.
- Minimum 3-5 years of experience in a senior role specifically focusing on R Language, RStudio, and SQL.
- Strong knowledge of statistical techniques (regression, clustering, hypothesis testing, etc.).
- Experience with data visualization tools like ggplot2, shiny, or plotly.
- Familiarity with SQL and database management systems.
- Knowledge of machine learning algorithms and their implementation in R.
- Ability to interpret complex data and communicate insights clearly to non-technical stakeholders.
- Strong problem-solving skills and attention to detail.
- Familiarity with version control tools like Git is a plus.
Experience: 10 to 15 years in DevOps, Infrastructure, or Cloud Architecture
Location: Pune, India

Key Responsibilities
DevOps Pipeline Development: Architect and build end-to-end DevOps pipelines using native cloud services (Azure DevOps, AWS CodePipeline, GCP Cloud Build) and third-party tools (Jenkins, GitLab, etc.).
Cloud Infrastructure Design: Define and implement foundational architecture setups across Azure, GCP, and AWS, adhering to best practices.
Network & Security Management: Design and deploy secure VPC architectures; manage networking components, security groups, load balancers, and VPN gateways.
Infrastructure as Code (IaC): Implement scalable and repeatable deployments using Terraform or Ansible.
CI/CD Frameworks: Establish CI/CD frameworks integrating with Git, containers, and orchestration tools (e.g., Kubernetes, ECS, AKS, GKE).
Security Compliance: Define and enforce cloud security best practices, including IAM, encryption, secrets management, and compliance standards.
Cross-functional Collaboration: Collaborate with application, data, and security teams to optimize infrastructure, release cycles, and system performance.
Continuous Improvement: Drive continuous improvement in automation, observability, and incident response practices.

What You Bring
Experience: 10 to 15 years in DevOps, Infrastructure, or Cloud Architecture roles.
Cloud Expertise: Deep hands-on expertise in Azure, GCP, or AWS cloud platforms (proficiency in at least one is mandatory; experience with multiple is a bonus).
Networking & Security: Strong knowledge of VPC architecture, Cloud Security, IAM, and Networking principles.
IaC Proficiency: Expertise in Terraform or Ansible for Infrastructure as Code.
CI/CD Automation: Experience building resilient CI/CD pipelines and automating application deployments.
Troubleshooting Skills: Strong troubleshooting skills across networking, compute, storage, and containers.
Academic Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred Certifications
Azure DevOps Engineer Expert / AWS Certified DevOps Engineer Professional / Google Professional DevOps Engineer
HashiCorp Certified: Terraform Associate (preferred for Terraform users)