
8431 Tuning Jobs - Page 16

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: VP - Digital Expert Support Lead
Experience: 15+ years
Location: Pune

Position Overview
The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
• Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
• Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
• Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
• Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
• Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
• Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
• Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
• Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
• Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
• Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
• Lead major incident response and drive cross-functional war rooms for critical recovery.
• Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching.
• Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.

Engineering Collaboration & Enhancements
• Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
• Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
• Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
• Coordinate enhancement requests based on operational analytics and feedback loops.
• Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
• Build and mentor a high-caliber AI Support Squad of support engineers, SREs, and automation leads.
• Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
• Present production health reports to business, engineering, and executive leadership.
• Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
• Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
• 15+ years of software engineering, platform reliability, or AI systems management experience.
• Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
• Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
• Deep expertise in Python and/or Java for production debugging and script/tooling development.
• Proficient in monitoring, logging, tracing, and alerts using enterprise tools (Grafana, ELK, Datadog).
• Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
• Experience working with distributed systems, banking APIs, and integration with Core/ERP systems.
• Strong understanding of incident management frameworks (ITIL) and the ability to drive postmortem discipline.
• Excellent stakeholder management, cross-functional coordination, and communication skills.
• Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
• Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere.
• Experience supporting multi-tenant AI applications with business-driven SLAs.
• Hands-on experience integrating with compliance and risk monitoring platforms.
• Familiarity with automated root cause inference or anomaly detection tooling.
• Past participation in enterprise architecture councils or platform reliability forums.
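The resilience patterns this role names (fallback logic, circuit breakers, context caching) can be sketched in a few lines of Python. The class below is an illustrative toy, not part of any stack the posting mentions, and the thresholds and messages are invented:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, then serve the fallback until `reset_timeout` seconds pass."""

    def __init__(self, max_failures=3, reset_timeout=30.0, fallback=None):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.fallback = fallback          # value returned while open
        self.failures = 0
        self.opened_at = None             # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return self.fallback      # fail fast while open
            self.opened_at = None         # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return self.fallback
        self.failures = 0
        return result

breaker = CircuitBreaker(max_failures=2, reset_timeout=60.0,
                         fallback="cached answer")

def flaky_model_call():
    raise RuntimeError("inference endpoint timed out")
```

After two failed calls the breaker opens and subsequent calls return the cached fallback without touching the endpoint, which is the behavior the "fallback logic, circuit breakers, and context caching" bullet describes.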

Posted 1 day ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


ONLY KOLKATA BASED CANDIDATES PREFERRED

Softweb Technologies Private Ltd is hiring Oracle Database Administrators. A minimum of 5+ years working on Oracle Database with Linux/Unix is desired. To know us better you may log into our website www.softweb.co.in

Key Responsibilities of this position:
• Database Administration: Installation, configuration, patching, and upgrading of Oracle databases on Linux/Unix systems.
• Performance Tuning: Monitoring database performance, identifying bottlenecks, and implementing solutions to optimize database operations.
• Backup and Recovery: Implementing and managing backup and recovery procedures to ensure data integrity and availability.
• Security: Implementing and managing database security measures, including access control, user management, and data encryption.
• Troubleshooting: Diagnosing and resolving database-related issues, including performance problems, connectivity issues, and data corruption.
• Scripting: Developing and maintaining shell scripts for automation of database tasks, such as backups, monitoring, and reporting.
• Collaboration: Working with other IT teams, such as system administrators, network engineers, and application developers, to ensure smooth database operations.
• Documentation: Creating and maintaining technical documentation related to database configurations, procedures, and troubleshooting steps.
• Staying Current: Keeping up to date with the latest Oracle database technologies, features, and best practices.

Required Skills and Qualifications:
• Oracle Database Expertise: Strong knowledge of Oracle database architecture, installation, configuration, performance tuning, and troubleshooting.
• Linux/Unix Proficiency: Solid understanding of Linux/Unix operating systems, including command-line navigation, system administration, and scripting.
• SQL and PL/SQL: Strong knowledge of SQL and PL/SQL for database querying, data manipulation, and stored procedure development.
• Backup and Recovery: Experience with Oracle's RMAN (Recovery Manager) and other backup and recovery tools.
• Scripting: Proficiency in shell scripting (e.g., Bash, Korn) for automation and task management.
• Communication: Excellent written and verbal communication skills for interacting with technical and non-technical stakeholders.
• Problem-Solving: Strong analytical and problem-solving skills to diagnose and resolve complex database issues.
• Oracle Enterprise Manager (OEM): Familiarity with Oracle's management tool for monitoring and managing databases.
• RAC (Real Application Clusters) and Data Guard: Knowledge of high availability and disaster recovery solutions.
• Cloud platforms (AWS, Azure, GCP): For organizations using cloud-based Oracle databases, experience with cloud platforms is beneficial.

Compensation and other perks as per our company standards.

Candidates who are willing to apply may directly contact me at 8910705575 or mail me at ritu.b@softweb.co.in
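The monitoring and scripting duties in a role like this often come down to small log-scanning tools. Here is a hedged sketch, in Python rather than shell, that counts ORA-NNNNN error codes in alert-log lines; the sample log lines are invented for illustration:

```python
import re

# Oracle error codes look like "ORA-NNNNN" with a five-digit number.
ORA_ERROR = re.compile(r"\bORA-(\d{5})\b")

def scan_alert_log(lines):
    """Return {error_code: count} for every ORA-NNNNN code seen in the log."""
    counts = {}
    for line in lines:
        for code in ORA_ERROR.findall(line):
            counts[code] = counts.get(code, 0) + 1
    return counts

sample = [
    "Completed: ALTER DATABASE OPEN",
    "ORA-01555: snapshot too old: rollback segment too small",
    "ORA-00600: internal error code, arguments: [example]",
    "ORA-01555: snapshot too old: rollback segment too small",
]
```

A script like this would typically be run from cron against the instance's alert log, alerting when counts cross a threshold.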

Posted 1 day ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site


• Data Processing & Embeddings: Work with text, image, audio, and video data to create embeddings and preprocess inputs for AI models.
• Prompt Engineering & Optimization: Experiment with prompt engineering techniques to improve AI responses and outputs.
• API Integration: Utilize OpenAI, Azure OpenAI, Hugging Face, and other APIs to integrate AI models into applications.
• Model Development & Fine-Tuning: Assist in training, fine-tuning, and deploying Generative AI models (GPT, Llama, Stable Diffusion, Claude, etc.).
• AI Workflows & Pipelines: Contribute to building and optimizing AI workflows using Python, TensorFlow, PyTorch, LangChain, or other frameworks.
• Cloud & Deployment: Deploy AI solutions on cloud platforms like Azure, AWS, or Google Cloud, leveraging serverless architectures and containerization (Docker, Kubernetes).
• AI Application Development: Collaborate with developers to build AI-powered applications using frameworks like Streamlit, Flask, or FastAPI.
• Experimentation & Research: Stay updated on the latest advancements in Generative AI and explore new use cases.
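The embedding work described above ultimately reduces to comparing vectors. A minimal sketch of cosine similarity and nearest-neighbor lookup over toy embeddings; the vectors and document keys are invented, and real embeddings would come from a model API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query, docs):
    """Return the key of the document embedding most similar to `query`."""
    return max(docs, key=lambda k: cosine_similarity(query, docs[k]))

# Toy 3-dimensional "embeddings"; production vectors have hundreds of dims.
docs = {"pets": [1.0, 0.0, 0.1], "finance": [0.0, 1.0, 0.9]}
```

The same `nearest` pattern underlies retrieval for RAG-style applications, just at larger scale and usually backed by a vector index.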

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site


Java Spring Boot Developer

Company: Gravity Engineering Services Pvt. Ltd. (GES)
Location: Bhubaneswar (Odisha), Raipur (Chhattisgarh), Patna (Bihar)
Position: Full Time
Experience: 5+ years
Email: kauser.fathima@gravityer.com
Ph: 9916141516

About Gravity:
Gravity Engineering Services is a digital transformation and product engineering company based in the USA, Europe, and India, delivering cutting-edge IT solutions. Our diverse portfolio includes Generative AI, commerce technologies, cloud management, business analytics, and marketing technologies. We are on a mission to build experiences and influence change by delivering digital consulting services that drive innovation, efficiency, and growth for businesses globally, with a vision to be the world's most valued technology company, driving innovation and making a positive impact on the world. Our goal is to achieve unicorn status (a valuation of $1 billion) by 2030.

Job Description:
• Lead Ecommerce Solution Design and Development: Spearhead the design and development of scalable, secure, and high-performance solutions for our ecommerce platform using Java Spring Boot.
• Collaborate with Cross-Functional Teams: Work closely with product managers, UI/UX designers, and quality assurance teams to gather requirements and deliver exceptional ecommerce solutions.
• Architectural Oversight: Provide architectural oversight for ecommerce projects, ensuring they are scalable, maintainable, and aligned with industry best practices.
• Technical Leadership: Lead and mentor a team of Java developers, fostering a collaborative and innovative environment. Provide technical guidance and support to team members.
• Code Review and Quality Assurance: Conduct regular code reviews to maintain code quality, ensuring adherence to Java Spring Boot coding standards and best practices. Implement and promote quality assurance processes within the development lifecycle.
• Integration of Ecommerce Solutions: Oversee the seamless integration of ecommerce solutions with other business systems, ensuring a cohesive and efficient data flow.
• Payment Gateway Integration: Collaborate on the integration of payment gateways and other essential ecommerce functionalities.
• Stay Informed on Ecommerce Technologies: Stay abreast of the latest developments in ecommerce technologies, incorporating new features and improvements based on emerging trends.
• Client Engagement: Engage with clients to understand their ecommerce requirements, provide technical insights, and ensure the successful implementation of solutions.

Desired Skills:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 2+ years of software development experience, with a strong focus on Java Spring Boot and e-commerce platforms.
• Hands-on experience with Java 8 or above, with a deep understanding of core Java concepts and modern Java features.
• Proficient in the Spring ecosystem, including Spring Boot, Spring MVC, Spring Data JPA, Spring Security, Spring AOP, and Spring Cloud (Config, Discovery, etc.).
• Strong understanding of microservices architecture, including REST API design and inter-service communication using REST, Kafka, or RabbitMQ.
• Practical experience in containerization using Docker and orchestration with Kubernetes (mandatory).
• Experience integrating payment gateways, order management, and inventory systems within e-commerce platforms.
• Hands-on with relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Redis) – at least one from each category.
• Familiarity with in-memory caching solutions like Redis or Hazelcast.
• Good understanding of database performance tuning techniques, including indexing and query optimization.
• Solid grasp of data structures and algorithms, with the ability to apply them in solving real-world problems.
• Excellent problem-solving and debugging skills, with the ability to work on complex technical challenges.

Skills: mongodb, algorithms, mysql, core java, spring boot, java, kafka, kubernetes, spring, spring data jpa, redis, e-commerce platforms, rabbitmq, data structures, docker, spring security, spring mvc, postgresql, rest api design, microservices architecture, spring cloud, java 8 or above, ecommerce, spring aop
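The in-memory caching requirement (Redis, Hazelcast) rests on eviction policies such as least-recently-used. A minimal sketch of the LRU idea, written in Python for brevity rather than the Java this role uses; the capacity and keys are invented:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: the eviction idea behind
    in-memory stores when configured with an LRU policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()        # insertion order = recency order

    def get(self, key, default=None):
        if key not in self.items:
            return default
        self.items.move_to_end(key)       # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
```

In practice a product catalog or session store would sit behind a cache like this, with Redis providing the same semantics across processes.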

Posted 1 day ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Sony Research India is seeking a dynamic and motivated Speech Recognition Intern to join our innovative research team. As an intern, you will work on real-world problems in automatic speech recognition (ASR), focusing on improving noise robustness and reducing hallucinations in transcription outputs. You'll gain hands-on experience with state-of-the-art tools and datasets, and contribute to impactful projects alongside experienced researchers and engineers.

Key Responsibilities:
• Explore and develop techniques to enhance ASR robustness under noisy, low-resource, and domain-shifted conditions.
• Investigate hallucination phenomena in end-to-end ASR models (e.g., Whisper, Wav2Vec2) and propose mitigation strategies.
• Conduct experiments using large-scale speech datasets and evaluate ASR performance across varying noise levels and linguistic diversity.
• Contribute to publications, technical reports, or open-source tools as outcomes of the research.

Work Location: Remote
Duration: This paid internship will run for a period of 6 months, starting the first week of June 2025.
Working Hours: 9:00 to 18:00 (Monday to Friday).

Qualification: Currently pursuing or completed a Master's (Research) or Ph.D. in deep learning/machine learning, with hands-on experience on Transformer models applied to audio/speech.

Must-Have Skills:
• Strong programming skills in Python, and familiarity with PyTorch or TensorFlow.
• Experience with speech processing libraries (e.g., Torchaudio, ESPnet, Hugging Face Transformers).
• Prior experience with ASR models like Wav2Vec2, Whisper, or RNN-T is a plus.
• Ability to read and implement academic papers.
• Strong foundation in machine learning and signal processing.

Good-to-Have Skills:
• Familiarity with prompt tuning, contrastive learning, or multi-modal architectures.
• Experience with evaluating hallucinations or generating synthetic speech/audio perturbations.
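Evaluating ASR performance, as the responsibilities above describe, usually means computing word error rate (WER). A self-contained sketch of the standard Levenshtein-based computation; the reference/hypothesis strings in the usage are invented examples:

```python
def word_error_rate(reference, hypothesis):
    """WER = edit distance over words / reference length, via the
    classic Levenshtein dynamic program."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                       # delete all remaining ref words
    for j in range(len(hyp) + 1):
        d[0][j] = j                       # insert all remaining hyp words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

Comparing WER on clean versus noise-augmented audio is the usual way to quantify the robustness gaps this internship targets.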

Posted 1 day ago

Apply

6.0 years

0 Lacs

India

Remote


Who we are
We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things, and even the little things like surgically embedded pacemakers. We help companies put trust - an abstract idea - to work. That's digital trust for the real world.

Job summary
As a DevOps Engineer, you will play a pivotal role in designing, implementing, and maintaining our infrastructure and deployment processes. You will collaborate closely with our development, operations, and security teams to ensure seamless integration of code releases, infrastructure automation, and continuous improvement of our DevOps practices. This role places a strong emphasis on infrastructure as code with Terraform, including module design, remote state management, policy enforcement, and CI/CD integration. You will manage authentication via Auth0, maintain secure network and identity configurations using AWS IAM and Security Groups, and oversee the lifecycle and upgrade management of AWS RDS and MSK clusters. Additional responsibilities include managing vulnerability remediation, containerized deployments via Docker, and orchestrating production workloads using AWS ECS and Fargate.

What you will do
• Design, build, and maintain scalable, reliable, and secure infrastructure solutions on cloud platforms such as AWS, Azure, or GCP.
• Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for efficient and automated software delivery.
• Develop and maintain infrastructure as code (IaC), with a primary focus on Terraform, including building reusable, modular, and parameterized modules for scalable infrastructure.
• Securely manage Terraform state using remote backends (e.g., S3 with DynamoDB locks) and establish best practices for drift detection and resolution.
• Integrate Terraform into CI/CD pipelines with automated plan, apply, and policy-check gating.
• Conduct testing and validation of Terraform code using tools such as Terratest, Checkov, or equivalent frameworks.
• Design and manage network infrastructure, including VPCs, subnets, routing, NAT gateways, and load balancers.
• Configure and manage AWS IAM roles, policies, and Security Groups to enforce least-privilege access control and secure application environments.
• Administer and maintain Auth0 for user authentication and authorization, including rule scripting, tenant settings, and integration with identity providers.
• Build and manage containerized applications using Docker, deployed through AWS ECS and Fargate for scalable and cost-effective orchestration.
• Implement vulnerability management workflows, including image scanning, patching, dependency management, and CI-integrated security controls.
• Manage RDS and MSK infrastructure, including lifecycle and version upgrades, high availability setup, and performance tuning.
• Monitor system health, performance, and capacity using tools like Prometheus, ELK, or Splunk; proactively resolve bottlenecks and incidents.
• Collaborate with development and security teams to resolve infrastructure issues, streamline delivery, and uphold compliance.

What you will have
• Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
• 6+ years in a DevOps or similar role, with strong experience in infrastructure architecture and automation.
• Advanced proficiency in Terraform, including module creation, backend management, workspaces, and integration with version control and CI/CD.
• Experience with remote state management using S3 and DynamoDB, and implementing Terraform policy-as-code with OPA/Sentinel.
• Familiarity with Terraform testing/validation tools such as Terratest, InSpec, or Checkov.
• Strong background in cloud networking, VPC design, DNS, and ingress/egress control.
• Proficient with AWS IAM, Security Groups, EC2, RDS, S3, Lambda, MSK, and ECS/Fargate.
• Hands-on experience with Auth0 or equivalent identity management platforms.
• Proficient in container technologies like Docker, with production deployments via ECS/Fargate.
• Solid experience in vulnerability and compliance management across the infrastructure lifecycle.
• Skilled in scripting (Python, Bash, PowerShell) for automation and tooling development.
• Experience in monitoring/logging using Prometheus, the ELK stack, Grafana, or Splunk.
• Excellent troubleshooting skills in cloud-native and distributed systems.
• Effective communicator and cross-functional collaborator in Agile/Scrum environments.

Nice to have
Terraform (Intermediate) • AWS (IAM, Security Groups, RDS, MSK, ECS/Fargate, CloudWatch) • Docker • CI/CD (GitLab, Jenkins) • Auth0 • Python/Bash

Benefits
• Generous time off policies
• Top shelf benefits
• Education, wellness and lifestyle support
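Drift detection, one of the Terraform duties above, conceptually reduces to comparing desired (IaC) attributes against the attributes actually observed in the cloud. In practice Terraform does this via `terraform plan`; the pure-Python sketch below only illustrates the idea, and the attribute names are invented:

```python
def detect_drift(desired, actual):
    """Compare desired (IaC) vs observed resource attributes.
    Returns {attribute: (desired_value, actual_value)} for every mismatch;
    attributes present on only one side are reported against None."""
    drift = {}
    for key in set(desired) | set(actual):
        want, have = desired.get(key), actual.get(key)
        if want != have:
            drift[key] = (want, have)
    return drift

desired = {"instance_type": "t3.micro", "tags": {"env": "prod"}}
actual = {"instance_type": "t3.small", "tags": {"env": "prod"}}
```

A CI job could run a comparison like this on a schedule and fail (or open a ticket) whenever the drift map is non-empty, which is the gating behavior the posting describes.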

Posted 1 day ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Company Description
Globify is a software solutions company based in Trivandrum, with a global presence in over 10 countries. Since 2016, Globify has completed 300+ projects, prioritizing client satisfaction, technological advancement, and operational excellence. The team's expertise and passion have solidified Globify as a trendsetter in the industry.

Role Description
This is a full-time on-site role for a WordPress Developer at Globify in Trivandrum. The WordPress Developer will be responsible for back-end and front-end web development, responsive web design, and general web design and development tasks.

Required Skills & Experience:
• 1+ years of professional WordPress development experience.
• Deep understanding of WordPress theme architecture and the template hierarchy.
• Strong experience converting Figma (or Sketch/XD) designs into custom themes.
• Proficient with PHP, MySQL, HTML5, CSS3, SCSS, JavaScript.
• Experience with Gutenberg, Elementor, ACF Pro, and custom post types.
• Familiarity with Webpack, Gulp, or similar build tools.
• Strong Git workflow knowledge (feature branches, pull requests).
• Understanding of SEO best practices, performance tuning, and security.

Bonus Points:
• Experience with WooCommerce.
• Knowledge of headless WordPress (e.g., using the REST API or GraphQL).
• Familiarity with CI/CD pipelines and deployment automation.

Job Type: Full-time
Benefits:
• Leave encashment
• Paid time off
Schedule:
• Monday to Friday
Ability to commute/relocate:
• Thiruvananthapuram, Kerala
Experience:
• WordPress: 1+ years (Required)
Language:
• Malayalam (Required)

Posted 1 day ago

Apply

0 years

0 Lacs

Chandigarh, India

On-site


• Experience in MySQL 5.5 and above versions.
• Hands-on experience with stored procedure development, MySQL tuning, and performance optimization.
• Hands-on experience with the various MySQL engine types, including InnoDB, MyISAM, and MEMORY.
• Hands-on experience with query optimization, index optimization, and the data dictionary.
• Experience with setup, configuration, maintenance, and troubleshooting of MySQL replication environments.
• Experience with high-availability MySQL environments.
• Experience with data backup and RAID 10/5.
• Experience with SQLite design, configuration, tuning, and consolidation.
• Experience with performance tuning MySQL databases for a variety of environments.
• Experience with Linux, including writing Linux scripts.
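The query and index optimization skills above can be demonstrated end to end with SQLite (which the posting also mentions) via Python's standard-library sqlite3: the same query flips from a full table scan to an index search once an index exists. The schema and data here are a toy, invented for illustration:

```python
import sqlite3

# In-memory database with a small synthetic orders table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    """Return the EXPLAIN QUERY PLAN detail text for a statement."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # without an index: a scan of the whole table
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # with the index: a search using idx_orders_customer
```

MySQL's `EXPLAIN` serves the same diagnostic role; reading plans like these is the core of the tuning work this job describes.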

Posted 1 day ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Req ID: 299670

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Systems Integration Analyst to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Position General Duties and Tasks:
• Participate in research, design, implementation, and optimization of machine learning models.
• Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products.
• Understanding of Revenue Cycle Management processes like claims filing and adjudication.
• Hands-on experience in Python.
• Build data ingest and data transformation platforms.
• Identify transfer learning opportunities and new training datasets.
• Build AI models from scratch and help product managers and stakeholders understand results.
• Analyse the ML algorithms that could be used to solve a given problem and rank them by their success probability.
• Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world.
• Verify data quality, and/or ensure it via data cleaning.
• Supervise the data acquisition process if more data is needed.
• Define validation strategies.
• Define the pre-processing or feature engineering to be done on a given dataset.
• Train models and tune their hyperparameters.
• Analyse the errors of the model and design strategies to overcome them.
• Deploy models to production.
• Create APIs and help business customers put the results of your AI models into operation.

Education: Bachelor's in computer science or similar; Master's preferred.

Skills:
• Hands-on programming experience working on enterprise products.
• Demonstrated proficiency in multiple programming languages, with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB.
• Knowledge of deep learning, machine learning, and artificial intelligence.
• Experience in building AI models using classification and clustering algorithms.
• Expertise in visualizing and manipulating big datasets.
• Strong in MS SQL.
• Acumen to take a complex problem, break it down into workable pieces, and code a solution.
• Excellent verbal and written communication skills.
• Ability to work in, and help define, a fast-paced, team-focused environment.
• Proven record of delivering and completing assigned projects and initiatives.
• Ability to deploy large-scale solutions to an enterprise estate.
• Strong interpersonal skills.
• Understanding of Revenue Cycle Management processes like claims filing and adjudication is a plus.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.
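The "define validation strategies" and "tune their hyperparameters" duties in this role can be illustrated with the simplest possible case: grid-searching a single hyperparameter (a decision threshold) against a held-out validation set. The scores, labels, and candidate grid below are invented toy data:

```python
def accuracy(data, threshold):
    """Fraction of (score, label) pairs the rule `score >= threshold`
    classifies correctly."""
    return sum((score >= threshold) == bool(label)
               for score, label in data) / len(data)

def tune_threshold(validation, candidates):
    """Grid-search the threshold hyperparameter on held-out validation data."""
    return max(candidates, key=lambda t: accuracy(validation, t))

# Toy validation set of (model score, true label) pairs.
val = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1)]
```

Real workflows swap in cross-validation and richer metrics, but the shape (fit on train, select on validation, report on test) is the same.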

Posted 1 day ago

Apply

2.0 - 3.0 years

0 Lacs

Gurugram, Haryana, India

Remote


Position Type: Part-Time / Contract / Remote
Location: Open
Experience Required: 2-3 years (or equivalent project experience)

Key Responsibilities:
• Work with Large Language Models (LLMs), both open-source (like LLaMA, Mistral, GPT-NeoX) and API-based (like OpenAI, Anthropic).
• Develop and manage multi-agent system architectures for AI-driven applications.
• Conduct comparative evaluation of LLMs based on quality, speed, cost, and reliability.
• Design, test, and optimize prompts and context management strategies, and resolve common issues like hallucinations and irrelevant outputs.
• Understand the basics of model fine-tuning and customizing pre-trained models for domain-specific use cases.
• Build and integrate backend systems using Node.js, RESTful/GraphQL APIs, and database management (SQL/NoSQL).
• Deploy applications on cloud platforms like Firebase, AWS, or Azure, and manage resources effectively.
• Implement and manage agentic AI systems, A2A (agent-to-agent) workflows, and RAG (Retrieval-Augmented Generation) pipelines.
• Handle hosting, scaling, and deployment of websites/web apps and maintain performance during high traffic loads.
• Optimize infrastructure for cost-effectiveness and high availability.
• Work with frameworks like LangChain, AutoGen, or equivalent LLM orchestration tools.
• Lead or collaborate on AI-powered product development from concept to deployment.
• Balance rapid prototyping against building production-grade, reliable AI applications.

Preferred Skills:
• Strong understanding of LLM evaluation frameworks and metrics.
• Familiarity with LangChain, AutoGen, Haystack, or similar AI agent management libraries.
• Working knowledge of AI deployment best practices.
• Basic knowledge of Docker, Kubernetes, and scalable hosting setups.
• Experience in managing cross-functional teams or AI development interns is a plus.

Bonus Advantage:
• Prior experience working on AI-powered SaaS platforms.
• Contribution to open-source AI projects or AI hackathons.
• Familiarity with data privacy, security compliance, and cost management in AI applications.
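The RAG pipeline responsibility can be sketched in its minimal form: retrieve the documents most relevant to a query, then assemble them into a prompt for the model. Word overlap stands in for embedding similarity here, and the documents and query are invented examples:

```python
def top_k(query, documents, k=2):
    """Rank documents by word overlap with the query (a crude stand-in
    for embedding similarity) and return the k best."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a RAG-style prompt: retrieved context first, then the question."""
    context = "\n".join(top_k(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "billing runs on a third-party payment provider",
    "the api gateway uses jwt auth",
    "office dogs are welcome on fridays",
]
```

Frameworks like LangChain wrap exactly this loop, swapping the overlap score for a vector store and sending the assembled prompt to an LLM.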

Posted 1 day ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Key Responsibilities:
• Design, develop, test, and deploy high-quality applications using .NET Framework and .NET Core.
• Develop RESTful APIs, backend services, and front-end components as needed.
• Participate in architecture decisions, code reviews, and performance tuning.
• Work closely with stakeholders to understand business requirements and translate them into technical solutions.
• Mentor and guide junior developers, ensuring adherence to coding standards and practices.
• Troubleshoot, debug, and resolve complex technical issues.
• Ensure the application's security, performance, and scalability.

Required Skills:
• 6+ years of hands-on experience in .NET development (C#, ASP.NET MVC, .NET Core).
• Strong knowledge of object-oriented programming, design patterns, and software architecture.
• Experience with front-end technologies like HTML5, CSS3, JavaScript, jQuery, and preferably Angular or React.
• Proficiency in SQL Server, including stored procedures, indexing, and performance tuning.
• Experience with Entity Framework, LINQ, and ADO.NET.
• Solid understanding of RESTful APIs, web services, and microservice architecture.
• Experience using version control systems (Git, TFS).
• Familiarity with Agile methodologies (Scrum/Kanban).

Preferred Skills:
• Knowledge of cloud platforms like Microsoft Azure or AWS.
• Experience with CI/CD pipelines and DevOps practices.
• Exposure to unit testing frameworks like MSTest, NUnit, or xUnit.
• Familiarity with containerization (Docker/Kubernetes) is a plus.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack is a must; ETL with batch and streaming (Kinesis) is desirable.
- Experience building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
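As a rough illustration of the star-schema dimensional modelling this role calls for, the sketch below builds one fact table surrounded by dimension tables and runs a typical reporting-layer aggregation. It uses Python's built-in sqlite3 as a stand-in for a warehouse engine such as Databricks SQL, and every table and column name is hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One fact table joined to dimension tables: the "star" shape.
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);

INSERT INTO dim_product VALUES (1, 'laptop'), (2, 'phone');
INSERT INTO dim_date    VALUES (10, 2024), (11, 2025);
INSERT INTO fact_sales  VALUES (1, 10, 1200.0), (1, 11, 900.0), (2, 11, 500.0);
""")

# Typical reporting query: aggregate the fact table by its dimensions.
rows = cur.execute("""
SELECT p.category, d.year, SUM(f.amount)
FROM fact_sales f
JOIN dim_product p ON p.product_id = f.product_id
JOIN dim_date    d ON d.date_id    = f.date_id
GROUP BY p.category, d.year
ORDER BY p.category, d.year
""").fetchall()

print(rows)  # [('laptop', 2024, 1200.0), ('laptop', 2025, 900.0), ('phone', 2025, 500.0)]
```

The same join shape carries over directly to Spark SQL over Delta tables; only the engine and scale differ.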

Posted 1 day ago


3.0 years

0 Lacs

Greater Kolkata Area

On-site


Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack is a must; ETL with batch and streaming (Kinesis) is desirable.
- Experience building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql

Posted 1 day ago


3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


About the Role
At Ceryneian, we're building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution. Our flagship platform is currently under development.

As a Backend Engineer, you will play a foundational role in designing and building the core trading engine and research infrastructure from the ground up. Your work will focus on developing performance-critical components that power backtesting, real-time strategy execution, and seamless integration with brokers and data providers. You'll be responsible for bridging core engine logic with Python-based strategy interfaces, supporting a modular system architecture for isolated and scalable strategy execution, and building robust abstractions for data handling and API interactions. This role is central to delivering the reliability, flexibility, and performance that our users will rely on in fast-moving financial markets.

Core Tasks
- Build and maintain the trading engine core for execution, backtesting, and event logging.
- Develop isolated strategy execution runners to support multi-user, multi-strategy environments.
- Implement abstraction layers for brokers and market data feeds to offer a unified API experience.
- Bridge the core engine language with Python strategies using gRPC, ZeroMQ, or similar interop technologies.
- Implement logic to parse and execute JSON-based strategy DSL from the strategy builder.
- Design compute-optimized components for multi-asset workflows and scalable backtesting.
- Capture real-time state, performance metrics, and slippage for both live and simulated runs.
- Collaborate with infrastructure engineers to support high-availability deployments.

Top Technical Competencies
- Proficiency in distributed systems, concurrency, and system design.
- Strong backend/server-side development skills using C++, Rust, C#, Erlang, or Python.
- Deep understanding of data structures and algorithms with a focus on low-latency performance.
- Experience with event-driven and messaging-based architectures (e.g., ZeroMQ, Redis Streams).
- Familiarity with Linux-based environments and system-level performance tuning.

Bonus Competencies
- Understanding of financial markets, asset classes, and algorithmic trading strategies.
- 0-3 years of prior DevOps experience (containerization, build pipelines, Infrastructure as Code).
- Hands-on experience with backtesting frameworks or financial market simulators.
- Experience with sandboxed execution environments or paper trading platforms.
- Advanced knowledge of multithreading, memory optimization, or compiler construction.
- Educational background from Tier-I or Tier-II institutions in India with strong computer science fundamentals, a passion for scalable system design, and a drive to build cutting-edge fintech infrastructure.

What We Offer
- Opportunity to shape the backend architecture of a next-gen fintech startup.
- A collaborative, technically driven culture.
- Competitive compensation with performance-based bonuses.
- Exposure to financial modeling, trading infrastructure, and real-time applications.
- Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.

Ideal Candidate
You're a backend-first thinker obsessed with reliability, latency, and architectural flexibility. You enjoy building scalable systems that transform complex strategy logic into high-performance, real-time trading actions. You think in microseconds, architect for fault tolerance, and build APIs designed for developer extensibility.
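One of the core tasks above is parsing and executing a JSON-based strategy DSL. A minimal sketch of what that could look like follows; the DSL shape ("condition"/"action", ops like "gt", field names like "price") is purely hypothetical, since a real strategy builder would define its own schema.

```python
import json

def evaluate(condition, tick):
    """Recursively evaluate a DSL condition tree against a market-data tick."""
    op = condition["op"]
    if op == "and":
        return all(evaluate(c, tick) for c in condition["args"])
    if op == "gt":
        return tick[condition["field"]] > condition["value"]
    if op == "lt":
        return tick[condition["field"]] < condition["value"]
    raise ValueError(f"unknown op: {op}")

def run_strategy(dsl_text, tick):
    """Parse the JSON DSL and return its action if the condition matches, else None."""
    strategy = json.loads(dsl_text)
    if evaluate(strategy["condition"], tick):
        return strategy["action"]
    return None

# Example strategy: buy 10 units when price > 100 and spread < 0.5.
dsl = json.dumps({
    "condition": {"op": "and", "args": [
        {"op": "gt", "field": "price", "value": 100},
        {"op": "lt", "field": "spread", "value": 0.5},
    ]},
    "action": {"side": "buy", "qty": 10},
})

print(run_strategy(dsl, {"price": 101.5, "spread": 0.2}))  # {'side': 'buy', 'qty': 10}
print(run_strategy(dsl, {"price": 99.0, "spread": 0.2}))   # None
```

In the production system described above, the interpreter would live in the engine's core language and receive ticks over gRPC or ZeroMQ; the recursive condition-tree evaluation is the part that carries over.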

Posted 1 day ago


10.0 years

0 Lacs

Mohali district, India

On-site


SourceFuse Technologies is hiring a Technical Architect with 10+ years of experience.

Overview:
You will work on a high-scale production application that handles thousands of transactions daily, with the goal of re-engineering and evolving it to support millions of transactions. In addition to architecting robust backend systems, you will play a key role in enabling intelligent, functional, and aesthetic user experiences, leveraging the latest in AI and automation to boost team productivity and product intelligence. This role also includes the exploration and integration of Generative AI technologies and AI-enhanced developer tooling to streamline development, testing, and delivery cycles.

Key Responsibilities:
- Collaborate closely with development and delivery teams to enable scalable, high-performance software solutions.
- Participate in client meetings as a technical expert to gather business and technical requirements and translate them into actionable solutions.
- Remain technology-agnostic, with a strong awareness of emerging tools, including AI-based solutions.
- Architect and present technical solutions aligned with business objectives and innovation goals.
- Lead R&D initiatives around Generative AI, AI-driven automation, and productivity tooling to enhance engineering efficiency and code quality.
- Create and maintain technical documentation, architecture diagrams, and AI-assisted design drafts.
- Work cross-functionally with clients and project teams to capture requirements and devise intelligent, future-ready solutions.
- Identify opportunities to integrate AI-based code generation, automated testing, and AI-enhanced observability into the SDLC.
- Mentor teams on the adoption of GenAI tools (e.g., GitHub Copilot, ChatGPT, Amazon CodeWhisperer) and establish governance around their responsible use.
- Drive innovation in architecture by exploring AI/ML APIs, LLM-based recommendation systems, and intelligent decision engines.

Education:
More than formal degrees, we're looking for someone who has the skills, curiosity, and initiative to deliver the responsibilities above, including the capacity to evaluate and leverage AI-driven technologies for real-world challenges.

Skills & Abilities:
- Deep understanding of AWS and other public cloud platforms.
- Expert-level experience in full-stack development using Node.js and Angular.
- Proficiency in architecting solutions from the ground up, from concept to production.
- Strong advocate of Test-Driven Development with hands-on implementation.
- Practical knowledge of microservices architecture, patterns, and scalability principles.
- Awareness and practical usage of observability tools and distributed tracing.
- Familiarity with OpenTelemetry and related observability frameworks.
- Experience with cloud-native development, containerization (e.g., Docker), and deployment on Kubernetes.
- Working knowledge of Infrastructure as Code (IaC) tools such as Terraform and Helm.
- Exposure to open-source frameworks and cloud-native stacks.
- Knowledge of LoopBack 4 is a plus.

Bonus: Experience using or integrating GenAI tools for tasks such as:
- Code scaffolding and refactoring
- Automated documentation generation
- Unit test case generation
- Intelligent API design
- Semantic search and natural language processing
- Prompt engineering and fine-tuning of LLMs

Experience:
- 10+ years of relevant experience in software architecture and engineering.
- At least 1-2 years of practical exposure to AI-enhanced development workflows or GenAI technologies is highly desirable.

Posted 1 day ago


0 years

0 Lacs

Anupgarh, Rajasthan, India

On-site


34070BR | Hyderabad

Job Description: JDE Technical Consultant (Offshore)
We are seeking a skilled JD Edwards (JDE) Technical Developer/Analyst to join our ERP team. The ideal candidate will be responsible for developing, customizing, and supporting JD Edwards EnterpriseOne (E1) applications, integrations, and reports to meet business needs. This role requires hands-on experience in JDE toolsets, UBE/BSFN, and Orchestrator, plus CNC knowledge for debugging and performance tuning.

Key Responsibilities
- Design, develop, and maintain JDE E1 applications and custom modules using JDE toolsets (FDA, RDA, TDA, ERW, etc.).
- Build and support JDE Orchestrator integrations and APIs with external systems.
- Develop UBEs, business functions (BSFNs), and custom reports.
- Troubleshoot and resolve technical issues across JDE modules (Finance).
- Perform system analysis, design reviews, and participate in solution design.
- Collaborate with business analysts and end users to gather requirements and provide technical solutions.
- Work closely with the CNC/Tech team for deployment, security, and performance optimization.
- Provide production support and ensure system stability and performance.
- Assist in JDE upgrades, ESUs, and tools releases as needed.
- Document technical designs, test cases, and deployment plans.

Qualifications
- B.Tech
- 10-14 years of experience

Posted 1 day ago


4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Sprinklr is a leading enterprise software company for all customer-facing functions. With advanced AI, Sprinklr's unified customer experience management (Unified-CXM) platform helps companies deliver human experiences to every customer, every time, across any modern channel. Headquartered in New York City with employees around the world, Sprinklr works with more than 1,000 of the world's most valuable enterprises, including global brands like Microsoft, P&G, Samsung and more than 50% of the Fortune 100. Learn more about our culture and how we make our employees happier through The Sprinklr Way.

Job Description
Role: IT Helpdesk Analyst - Bangalore, India

What you will do:
- Provide local on-site support of the Bangalore office five days a week.
- Provide global support for all Sprinklr users.
- Track and document all support activities.
- Support primarily macOS, Windows, AV, and all common office software and applications such as Microsoft, Adobe, and cloud SaaS products.
- Perform laptop setup and inductions for users, plus hardware troubleshooting and repair.
- Handle general office IT such as conference room, telephone, office network, and printer setups.
- Implement applications and software upgrades, as well as performance troubleshooting and tuning for users.
- Communicate and document troubleshooting techniques and best practices.
- Perform endpoint management deployment and anti-virus security.
- Work with IT management to constantly monitor and improve delivery of IT systems and support.
- Proactively understand, analyse, and research new technical problems when needed.
- Support high-level events and proactively monitor meeting rooms.
- Support new-hire onboarding by preparing and provisioning devices, creating user accounts, and guiding users through the IT setup process.
- Assist with employee offboarding, ensuring secure return of assets and revocation of system access in coordination with HR and Security.
- Collaborate with cross-regional IT teams to troubleshoot and resolve complex or escalated issues.
- Maintain documentation for SOPs, knowledge base articles, and user guides to promote self-service and knowledge sharing.
- Support VIP users and executives, ensuring high-priority response and proactive care during events or travel.

AV & Event Support Responsibilities
- Set up and manage AV equipment for executive meetings and high-profile events, ensuring smooth operations.
- Configure and troubleshoot displays, video, and audio connections, including mixers, computers, and peripherals.

About You
- 4+ years' experience within IT or a B.S. degree.
- Self-motivation and the ability to work with minimum supervision.
- Excellent written and verbal communication skills and meticulous attention to detail.
- Experience working with high-level executives.
- Good understanding of Microsoft 365 and computer networking.
- Experience with Jamf Pro and enterprise Mac management concepts.
- Experience with end-user customer support; strong technical knowledge of macOS, Windows, and other Microsoft products.
- Ability to use customer-service-oriented techniques to determine and resolve problems and respond competently with the appropriate sense of urgency to user requests.
- Works both independently and as part of a team with professionals at all levels.
- Quick learner; a proactive individual with the ability to work in a dynamic, fast-changing environment.
- Ability to prioritise tasks and work on multiple assignments.

Essential technologies:
- Conferencing & collaboration tools: Microsoft Teams, SharePoint, OneDrive.
- AV & event technologies: video conferencing systems, audio mixers, display management, and camera switching.

Why You'll Love Sprinklr:
We're committed to creating a culture where you feel like you belong, are happier today than you were yesterday, and your contributions matter. At Sprinklr, we passionately, genuinely care.
For full-time employees, we provide a range of comprehensive health plans, leading well-being programs, and financial protection for you and your family through a range of global and localized plans throughout the world. For more information on Sprinklr benefits around the world, head to https://sprinklrbenefits.com/ to browse our country-specific benefits guides.

We focus on our mission: We founded Sprinklr with one mission: to enable every organization on the planet to make their customers happier. Our vision is to be the world's most loved enterprise software company, ever.

We believe in our product: Sprinklr was built from the ground up to enable a brand's digital transformation. Its platform provides every customer-facing team with the ability to reach, engage, and listen to customers around the world. At Sprinklr, we have many of the world's largest brands as our clients, and our employees have the opportunity to work closely alongside them.

We invest in our people: At Sprinklr, we believe every human has the potential to be amazing. We empower each Sprinklrite in the journey toward achieving their personal and professional best. For wellbeing, this includes daily meditation breaks, virtual fitness, and access to Headspace. We have continuous learning opportunities available with LinkedIn Learning and more.

EEO - Our philosophy: Our goal is to ensure every employee feels like they belong and are operating in a judgment-free zone regardless of gender, race, ethnicity, age, and lifestyle preference, among others. We value and celebrate diversity and fervently believe every employee matters and should be respected and heard. We believe we are stronger when we belong because collectively, we're more innovative, creative, and successful. Sprinklr is proud to be an equal-opportunity workplace and is an affirmative-action employer.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. See also Sprinklr's EEO Policy and EEO is the Law.

Posted 1 day ago


8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role: Senior Network Engineer - Tier 4
Work mode: Hybrid

Job Summary:
We are seeking a highly skilled and experienced Senior Network Engineer (Tier 4) with deep expertise in Juniper routing and switching, Fortinet firewall configuration and management, and enterprise network architecture. This role is critical in designing, implementing, and supporting complex network infrastructures for large-scale enterprise environments.

Key Responsibilities:
- Lead the design, deployment, and optimization of enterprise network solutions using Juniper and Fortinet technologies.
- Serve as the highest-level escalation point for complex network issues (Tier 4 support).
- Architect and implement secure, scalable, and resilient network infrastructures.
- Configure and manage Fortinet firewalls (FortiGate, FortiManager, FortiAnalyzer).
- Design and maintain Juniper-based routing and switching environments (MX, EX, QFX series).
- Collaborate with cross-functional teams to align network strategies with business goals.
- Conduct network assessments, performance tuning, and capacity planning.
- Develop and maintain detailed network documentation, diagrams, and SOPs.
- Mentor junior engineers and provide technical leadership across projects.
- Stay current with emerging technologies and recommend improvements.

Required Qualifications:
Certifications:
- JNCIA-Junos (Juniper Networks Certified Associate)
- NSE 4 (Fortinet Network Security Expert Level 4)

Technical Expertise:
- Advanced knowledge of Juniper routing and switching (OSPF, BGP, MPLS, VXLAN, EVPN).
- Expert-level experience with Fortinet firewall configuration, policies, VPNs, and UTM features.
- Strong understanding of enterprise network design principles and best practices.
- Proficiency in network monitoring, troubleshooting, and performance analysis tools.
- Familiarity with automation and scripting (Python, Ansible) is a plus.

Experience:
- 8+ years of hands-on experience in network engineering roles.
- Proven track record in designing and supporting large-scale enterprise networks.
- Experience in high-availability and disaster recovery network planning.

Preferred Skills:
- Additional Juniper certifications (e.g., JNCIS, JNCIP, JNCIE).
- Experience with SD-WAN, cloud networking (AWS, Azure), and NAC solutions.
- Knowledge of ITIL processes and change management.
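The automation-and-scripting skill mentioned above often amounts to rendering device configuration from structured data. The sketch below generates firewall-policy stanzas from a list of dicts; the CLI syntax only approximates FortiOS-style configuration and is not authoritative, and the interface and policy names are invented for the example.

```python
# Template for one policy entry; fields are filled from a dict per policy.
POLICY_TEMPLATE = """edit {seq}
    set name "{name}"
    set srcintf "{srcintf}"
    set dstintf "{dstintf}"
    set action {action}
next"""

def render_policies(policies):
    """Render an ordered list of policy dicts into one config block."""
    body = "\n".join(POLICY_TEMPLATE.format(seq=i + 1, **p)
                     for i, p in enumerate(policies))
    return f"config firewall policy\n{body}\nend"

config = render_policies([
    {"name": "allow-web", "srcintf": "lan", "dstintf": "wan1", "action": "accept"},
    {"name": "deny-all",  "srcintf": "any", "dstintf": "any",  "action": "deny"},
])
print(config)
```

Generating configuration from data like this keeps policy intent reviewable in version control; the same pattern underlies Ansible templates (Jinja2) for Juniper and Fortinet devices.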

Posted 1 day ago


3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Role: Senior Databricks Engineer / Databricks Technical Lead/ Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack is a must; ETL with batch and streaming (Kinesis) is desirable.
- Experience building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql

Posted 1 day ago


4.0 - 7.0 years

0 Lacs

Mulshi, Maharashtra, India

On-site


Area(s) of responsibility
- 4-7 years' experience in PTC Windchill and ThingWorx customization & configuration.
- Experienced in: solution design, Windchill customization debugging, Windchill development fundamentals, documentation, software testing, software maintenance, and software performance tuning.
- Strong product development methodology and tools experience, including agile methods, source management, problem resolution, automated testing, DevOps, CI/CD, GitHub, SVN, etc.

Technical competences (required):
- Windchill application skills in basic and advanced Java, web services, JavaScript, shell scripting, SQL, HTML, and CSS.
- Knowledge of Windchill implementation in the basic modules is a must.
- Very skilled in PTC Windchill PDMLink customization, XML, and database (SQL) programming.
- In-depth knowledge and good experience in Java, J2EE, JSP, and JavaScript.
- Good understanding of basic PLM processes such as BOM management, part management, document management, EBOM, and MBOM.
- Basic knowledge of UML and Unix administration.
- A strong business focus, dedicated to meeting the expectations and requirements of the business.
- Ability to translate and balance functional and non-functional business requirements into solutions, i.e., work with customers to translate high-level business requirements into detailed functional specifications, and manage changes to the specifications to support impacted business functions and systems.
- Good communication and presentation skills.

Posted 1 day ago


5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous cars, government, academia, research, and manufacturing.

"DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC

"The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI storage in high-performance environments." – Marc Hamilton, VP, Solutions Architecture & Engineering, NVIDIA

DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage.

About the Role
You will lead the design and implementation of scalable, secure, and highly available infrastructure across both cloud and on-premise environments. This role demands a deep understanding of Linux systems, infrastructure automation, and performance tuning, especially in high-performance computing (HPC) setups.
As a technical leader, you'll collaborate closely with development, QA, and operations teams to drive DevOps best practices, tool adoption, and overall infrastructure reliability.

Key Responsibilities:
- Design, build, and maintain Linux-based infrastructure across cloud (primarily AWS) and physical data centers.
- Implement and manage Infrastructure as Code (IaC) using tools such as CloudFormation, Terraform, Ansible, and Chef.
- Develop and manage CI/CD pipelines using Jenkins, Git, and Gerrit to support continuous delivery.
- Automate provisioning, configuration, and software deployments with Bash, Python, Ansible, etc.
- Set up and manage monitoring/logging systems such as Prometheus, Grafana, and the ELK stack.
- Optimize system performance and troubleshoot critical infrastructure issues related to networking, filesystems, and services.
- Configure and maintain storage and filesystems including ext4, xfs, LVM, NFS, iSCSI, and potentially Lustre.
- Manage PXE boot infrastructure using Cobbler/Kickstart, and create/maintain custom ISO images.
- Implement infrastructure security best practices, including IAM, encryption, and firewall policies.
- Act as a DevOps thought leader, mentor junior engineers, and recommend tooling and process improvements.
- Maintain clear and concise documentation of systems, processes, and best practices.
- Collaborate with cross-functional teams to ensure reliable and scalable application delivery.

Required Skills & Experience
- 5+ years of experience in DevOps, SRE, or Infrastructure Engineering.
- Deep expertise in Linux system administration, especially around storage, networking, and process control.
- Strong proficiency in scripting (e.g., Bash, Python) and configuration management tools (Chef, Ansible).
- Proven experience in managing on-premise data center infrastructure, including provisioning and PXE boot tools.
- Familiarity with CI/CD systems, Agile workflows, and Git-based source control (Gerrit/GitHub).
- Experience with cloud services, preferably AWS, and hybrid cloud models.
- Knowledge of virtualization (e.g., KVM, Vagrant) and containerization (Docker, Podman, Kubernetes).
- Excellent communication, collaboration, and documentation skills.

Nice to Have
- Hands-on experience with Lustre or other distributed/parallel filesystems.
- Experience in HPC (High-Performance Computing) environments.
- Familiarity with Kubernetes deployments in hybrid clusters.
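The automation work this role describes often starts with small checks like the one sketched below: parsing `df -P`-style output and flagging filesystems above a usage threshold. The sample output and the 80% threshold are fabricated for the example; a real check would run `df` itself and feed an alerting system.

```python
# Fabricated `df -P`-style output used as fixed input for the example.
SAMPLE_DF = """\
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 1048576 943718 104858 90% /
/dev/sdb1 2097152 419430 1677722 20% /data
"""

def full_filesystems(df_output, threshold=80):
    """Return (mount point, usage%) pairs above `threshold` percent."""
    alerts = []
    for line in df_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        usage = int(fields[4].rstrip("%"))   # Capacity column, e.g. "90%"
        mount = fields[5]                    # Mounted-on column
        if usage > threshold:
            alerts.append((mount, usage))
    return alerts

print(full_filesystems(SAMPLE_DF))  # [('/', 90)]
```

POSIX `df -P` guarantees one record per line, which is what makes the naive `split()` parsing above safe; unportable `df` output can wrap long device names across lines.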

Posted 1 day ago


8.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

Remote


Technical Engineer
We are looking for a knowledgeable Senior Linux Technical Engineer to join our team. The ideal candidate will be responsible for overseeing and ensuring the stability and performance of our Linux systems and providing high-level technical assistance to our clients.

Responsibilities
- Troubleshoot and resolve complex Linux system issues in a timely manner.
- Monitor system performance, availability, and security of Linux environments.
- Develop and implement support procedures and best practices for Linux systems.
- Collaborate with cross-functional teams to enhance system reliability and performance.
- Provide training and support to junior team members on Linux technologies.
- Manage escalated support tickets and ensure client satisfaction.
- Stay updated on industry trends and advancements in Linux technologies.

Technical Skills
- System administration: deep knowledge of Linux/Unix systems, including installation, configuration, and maintenance.
- Scripting languages: proficiency in Bash, Python, Perl, or other scripting languages for automation.
- Network management: understanding of networking protocols, firewall configurations, and network troubleshooting.
- Security: knowledge of security best practices, including firewalls, intrusion detection systems, and access control.
- Virtualization and cloud computing: experience with virtualization (KVM, VMware) and cloud platforms (AWS, Azure, GCP).
- Configuration management: familiarity with tools like Ansible, Puppet, or Chef for automating and managing configurations.
- Monitoring and performance tuning: ability to monitor system performance and optimize system resources using tools like Nagios, Zabbix, or Prometheus.
- Storage management: understanding of different storage solutions and file systems, including RAID, LVM, and SAN/NAS.
- Containerization: experience with Docker and Kubernetes for managing containerized applications.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 8+ years of experience in a Linux support role.
- Strong knowledge of Linux operating systems and server administration.
- Experience with scripting languages and automation tools.
- Excellent problem-solving skills and the ability to work under pressure.
- Strong communication and interpersonal skills.

Benefits
- Competitive salary and performance-based bonuses.
- Comprehensive health, dental, and vision insurance plans.
- Retirement savings plan with company contributions.
- Professional development opportunities and training.
- Flexible working hours and remote work options.

If you are a passionate Linux professional with a desire to lead and support a talented team, we encourage you to apply for this position.

Posted 1 day ago


3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We're hiring a Teradata Administrator for a US product-based company in Pune (permanent opportunity).

Location: Hinjewadi Phase II (hybrid)
Shift: 9:30 PM - 6:30 AM IST (night shift)
Experience: 3+ years

Are you a database expert with a passion for high-impact infrastructure work and a strong command of enterprise systems? We're looking for a Physical Database Architect who can help design, build, optimize, and support mission-critical database environments.

What You'll Do:
• Translate logical data models into robust, scalable physical database architectures
• Drive physical database design, deployment, performance tuning, and security configuration
• Serve as the primary development database contact, collaborating with application teams and production DBAs
• Support incident resolution, performance troubleshooting, and proactive monitoring
• Align IT infrastructure with business strategies by partnering with BAs, architects, and development teams
• Provide technical consultation on infrastructure planning and implementation
• Evaluate service options, recommend improvements, and ensure designs meet enterprise architecture standards

Required Experience:
✔️ 2+ years working with Teradata database technologies
✔️ 2+ years of experience in database performance tuning and troubleshooting
✔️ 2+ years of hands-on SQL or similar query language use
✔️ 1+ years working with database monitoring tools such as Foglight or equivalents
✔️ 2+ years supporting development projects
✔️ 1+ years of experience in database administration

What You Bring:
• Strong technical foundation in database and infrastructure design
• Excellent cross-functional collaboration skills with a focus on delivery and uptime
• Proactive mindset with strong problem-solving and performance analysis abilities
• Commitment to continuous improvement, documentation, and best practices

If you're ready to make an impact by driving scalable, reliable database solutions, we want to hear from you! Kindly share your updated CV at rakhee.su@peoplefy.com

Posted 1 day ago


5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are looking for an SSIS Developer experienced in maintaining ETL solutions using Microsoft SQL Server Integration Services (SSIS). The candidate should have extensive hands-on experience in data migration, data transformation, and integration workflows between multiple systems; exposure to Oracle Cloud Infrastructure (OCI) is preferred.

No. of Resources Required: 2 (1 resource with 5+ years' experience and 1 resource with 3+ years' experience).

Job Description:
We are looking for a highly skilled and experienced Senior SSIS Developer to design, develop, deploy, and maintain ETL solutions using SSIS.

Job Location: Corporate Office, Gurgaon

Key Responsibilities:
• Design, develop, and maintain complex SSIS packages for ETL processes across different environments.
• Perform end-to-end data migration from legacy systems to modern platforms, ensuring data quality, integrity, and performance.
• Work closely with business analysts and data architects to understand data integration requirements.
• Optimize ETL workflows for performance and reliability, including incremental loads, batch processing, and error handling.
• Schedule and automate SSIS packages using SQL Server Agent or other tools.
• Conduct root cause analysis and provide solutions for data-related issues in production systems.
• Develop and maintain technical documentation, including data mapping, transformation logic, and process flow diagrams.
• Support integration of data between on-premises systems and Oracle Cloud (OCI) using SSIS and/or other middleware tools.
• Participate in code reviews, unit testing, and deployment support.

Education:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).

Required Skills:
• 3-7 years of hands-on experience in developing SSIS packages for complex ETL workflows.
• Strong SQL/T-SQL skills for querying, data manipulation, and performance tuning.
• Solid understanding of data migration principles, including historical data load, data validation, and reconciliation techniques.
• Experience working with various source/target systems such as flat files, Excel, Oracle, DB2, and SQL Server.
• Good knowledge of job scheduling and automation techniques.

Preferred Skills:
• Exposure to or working experience with Oracle Cloud Infrastructure (OCI), especially in data transfer, integration, and schema migration.
• Familiarity with on-premises-to-cloud and cloud-to-cloud data integration patterns.
• Knowledge of Azure Data Factory, Informatica, or other ETL tools is a plus.
• Experience in .NET or C# for custom script components in SSIS is advantageous.
• Understanding of data warehousing and data lake concepts.

If interested, kindly revert with your resume and the details below to amit.ranjan@binarysemantics.com
• Total Experience:
• Years of Experience in SSIS Development:
• Years of Experience in maintaining ETL Solutions using SSIS:
• Years of Experience in Data Migration / Data Transformation and integration workflows between multiple systems:
• Years of Experience in Oracle Cloud Infrastructure (OCI):
• Current Location:
• Home town:
• Reason for change:
• Minimum Joining Time:

Regards,
Amit Ranjan

Posted 1 day ago


3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
• Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
• Provide forward-thinking solutions in the data engineering and analytics space
• Collaborate with DW/BI leads to understand new ETL pipeline development requirements
• Triage issues to find gaps in existing pipelines and fix them
• Work with the business to understand reporting-layer needs and develop data models to fulfill them
• Help junior team members resolve issues and technical challenges
• Drive technical discussions with client architects and team members
• Orchestrate data pipelines via the Airflow scheduler

Skills And Qualifications
• Bachelor's and/or master's degree in computer science or equivalent experience
• 6+ years of total IT experience, including 3+ years in data warehouse/ETL projects
• Deep understanding of Star and Snowflake dimensional modelling
• Strong knowledge of Data Management principles
• Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
• Hands-on experience in SQL, Python, and Spark (PySpark)
• Experience with the AWS/Azure stack (required)
• ETL with batch and streaming (Kinesis) is desirable
• Experience building ETL / data warehouse transformation processes
• Experience with Apache Kafka for streaming / event-based data
• Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
• Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
• Experience working with structured and unstructured data, including imaging and geospatial data
• Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
• Databricks Certified Data Engineer Associate/Professional certification (desirable)
• Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
• Experience working in Agile methodology
• Strong verbal and written communication skills
• Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks

Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql

Posted 1 day ago


Exploring Tuning Jobs in India

The job market for tuning professionals in India is constantly growing, with many companies actively seeking skilled individuals to optimize and fine-tune their systems and applications. Tuning jobs can be found in a variety of industries, including IT, software development, and data management.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi-NCR

These cities are known for their thriving tech industries and offer numerous opportunities for tuning professionals.

Average Salary Range

The average salary range for tuning professionals in India varies based on experience and location. Entry-level roles may offer salaries starting from INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.

Career Path

In the field of tuning, a typical career path may include progression from Junior Tuning Specialist to Senior Tuning Engineer, and eventually to Lead Tuning Architect. With experience and expertise, professionals can take on more challenging projects and leadership roles within organizations.

Related Skills

In addition to tuning skills, professionals in this field are often expected to have knowledge in areas such as database management, performance optimization, troubleshooting, and scripting languages like SQL or Python.

Interview Questions

  • What is query tuning and why is it important? (basic)
  • Explain the difference between indexing and partitioning. (medium)
  • How would you troubleshoot a slow-performing database query? (medium)
  • Can you discuss a challenging tuning project you've worked on in the past? (advanced)
  • What tools do you use for performance monitoring and tuning? (basic)
  • How do you approach performance tuning for a web application? (medium)
  • What factors can impact the performance of a database server? (basic)
  • Describe your experience with query execution plans. (medium)
  • How would you handle a sudden spike in database traffic affecting performance? (advanced)
  • Have you worked with NoSQL databases for tuning purposes? If so, can you provide an example? (advanced)
  • Explain the concept of query optimization and its importance in database tuning. (medium)
  • What are some common performance issues in a distributed system, and how would you address them? (advanced)
  • How do you stay updated on the latest trends and best practices in database tuning? (basic)
  • Can you discuss a scenario where you had to balance between performance optimization and data integrity? (advanced)
  • What role does indexing play in database tuning, and how do you determine the most effective indexing strategy? (medium)
  • How do you approach tuning for both read and write operations in a database system? (medium)
  • What are some key metrics you would monitor to assess the performance of a database server? (basic)
  • How would you handle a situation where a database query is causing a deadlock? (advanced)
  • Describe a scenario where you had to optimize a query for a large dataset. (medium)
  • Have you worked with caching mechanisms for performance tuning? If so, discuss your experience. (advanced)
  • How do you prioritize performance tuning tasks when working on multiple projects simultaneously? (medium)
  • Can you explain the concept of query cost and its significance in query tuning? (medium)
  • How do you approach tuning for a system that experiences fluctuations in workload throughout the day? (advanced)
  • What steps would you take to optimize the performance of a database server during a migration to a new platform? (advanced)
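Several of the questions above (indexing strategy, query execution plans, optimizing a query for a large dataset) come down to reading a query plan before and after a change. The sketch below shows the idea using Python's built-in sqlite3 module; the `orders` table, column names, and index name are hypothetical, and the exact plan text varies by SQLite version:

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    """Return the plan details SQLite chooses for a statement.

    EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    the human-readable detail string is column 3.
    """
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT amount FROM orders WHERE customer_id = 42"

# Without an index on customer_id, the plan is a full table scan.
before = plan(query)
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
# With the index in place, the plan becomes an index search.
after = plan(query)

print(before)  # e.g. a SCAN over orders
print(after)   # e.g. a SEARCH using idx_orders_customer
```

The same before/after workflow applies to production engines (`EXPLAIN ANALYZE` in PostgreSQL, execution plans in SQL Server, `EXPLAIN` in Teradata), which is usually what interviewers are probing for with the plan-reading questions.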

Closing Remark

As you navigate the job market for tuning roles in India, remember to showcase your expertise, stay updated on industry trends, and prepare thoroughly for interviews. With the right skills and mindset, you can land a rewarding career in this dynamic field. Good luck!
