
18519 Tuning Jobs - Page 33

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Job Title: Sr. Node.js Developer
Experience: 5–7 years
Location: On-site – Infocity, Gandhinagar
Employment Type: Full-time

Job Summary
We are seeking a highly skilled and experienced Sr. Node.js Developer to join our growing development team. The ideal candidate will have strong expertise in backend development using Node.js along with a deep understanding of database technologies (both SQL and NoSQL). You'll be responsible for designing, building, and maintaining scalable APIs and backend systems.

Key Responsibilities
- Design, develop, and maintain RESTful APIs and backend services using Node.js.
- Optimize applications for maximum speed and scalability.
- Integrate with various database systems (e.g., MongoDB, MySQL, PostgreSQL).
- Collaborate with front-end developers and product teams to deliver high-quality solutions.
- Write clean, modular, and reusable code following best practices.
- Conduct code reviews and provide mentorship to junior developers.
- Identify performance bottlenecks and suggest optimizations.
- Implement and manage third-party integrations.
- Ensure application security and data protection practices.

Required Skills & Qualifications
- 5–7 years of experience in backend development using Node.js.
- Proficient in JavaScript (ES6+), Express.js, and asynchronous programming.
- Strong knowledge of database systems like MongoDB, MySQL, PostgreSQL.
- Hands-on experience with ORMs and query builders (e.g., Sequelize, Mongoose, Knex).
- Good understanding of API security, authentication (JWT, OAuth), and performance tuning.
- Familiarity with version control systems (Git).
- Experience with Docker, Redis, or other caching mechanisms is a plus.
- Ability to write unit/integration tests using Mocha, Jest, or similar tools.
- Experience with CI/CD pipelines and automation using tools like GitHub Actions, GitLab CI, Jenkins, etc.
- Hands-on experience deploying and managing applications on AWS (EC2, S3, RDS, Lambda, etc.).
- Familiarity with containerization (Docker) and orchestration platforms like Kubernetes.
- Strong grasp of version control systems (Git) and collaborative development practices.
- Bachelor's degree in Computer Science, Information Technology, or a related technical field.

Key Notes
- Experience in Agile/Scrum development environments.
- Strong analytical and debugging skills with a keen attention to detail.
- Excellent communication and team collaboration abilities.
- Local candidates from Gandhinagar or Ahmedabad preferred.
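The posting above asks for API authentication with JWT. As a rough illustration of the idea (not the Node.js stack named in the listing, where a library like jsonwebtoken would be used), here is a minimal HS256 signing and verification sketch in Python using only the standard library; the payload fields and secret are hypothetical:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with the padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." +
                     b64url(json.dumps(payload, separators=(",", ":")).encode()))
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, secret: str) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

token = sign_jwt({"sub": "user-42", "role": "dev"}, "s3cret")
print(verify_jwt(token, "s3cret"))
```

In production the secret lives in a secrets manager, tokens carry an expiry claim, and verification is done by a maintained library rather than hand-rolled code.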

Posted 4 days ago

Apply

4.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for experienced SAP ABAP Consultants who have strong implementation experience and are capable of delivering complex customizations and integrations. The ideal candidate will have experience working across SAP modules and multi-technology environments, with the ability to contribute to architecture, development, and deployment.

Key Responsibilities:
- Design, develop, and implement custom ABAP programs as per business requirements
- Work on SAP implementation and support projects across modules (SD, MM, FI, HR, etc.)
- Develop interfaces and integrate SAP with external systems (e.g., ERP, CRM, Mobile Portals)
- Collaborate with functional teams to understand business processes and provide technical solutions
- Participate in technical architecture discussions and propose scalable solutions
- Handle performance tuning, debugging, and troubleshooting of custom developments
- Ensure adherence to SAP development standards and best practices
- Travel to client locations for workshops, implementation support, and go-lives as needed

Key Requirements:
- 4 to 15 years of hands-on experience in SAP ABAP development
- Strong experience with Reports, Interfaces, Enhancements, Forms, Workflows, and Data Migration
- Experience in SAP implementation projects is a must
- Proven experience with system integration using technologies like RFCs, BAPIs, IDocs, Web Services, etc.
- Understanding of multi-system environments (ERP, CRM, SAP Fiori, Mobile Portals)
- Ability to work in cross-functional teams and communicate with business users
- Willingness to travel to client locations when required

Nice to Have:
- Experience in S/4HANA ABAP
- Knowledge of OData, CDS Views, and Fiori development
- Exposure to Agile methodologies

Posted 4 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Role
We are looking for a highly competent and detail-oriented Database Administrator with 3–5 years of experience in SQL Server environments. The ideal candidate should have hands-on experience with performance tuning, backup strategies, query optimization, and maintaining high availability. You will play a critical role in ensuring database stability, scalability, and security across business applications.

Key Responsibilities
· Manage and maintain Microsoft SQL Server databases (2016 and later) across development, UAT, and production environments.
· Monitor and improve database performance using Query Store, Extended Events, and Dynamic Management Views (DMVs).
· Design and maintain indexes, partitioning strategies, and statistics to ensure optimal performance.
· Develop and maintain T-SQL scripts, views, stored procedures, and triggers.
· Implement robust backup and recovery solutions using native SQL Server tools and third-party backup tools (if applicable).
· Ensure business continuity through high-availability configurations such as Always On Availability Groups, Log Shipping, or Failover Clustering.
· Perform database capacity planning and forecast growth requirements.
· Ensure SQL Server security by managing logins, roles, permissions, and encryption features like TDE.
· Collaborate with application developers for schema design, indexing strategies, and performance optimization.
· Handle deployments, patching, and version upgrades in a controlled and documented manner.
· Maintain clear documentation of database processes, configurations, and security policies.

Required Skills & Qualifications
· Bachelor's degree in Computer Science, Engineering, or related field.
· 3–5 years of solid experience with Microsoft SQL Server (2016 or later).
· Strong command of T-SQL including query optimization, joins, CTEs, window functions, and error handling.
· Proficient in interpreting execution plans, optimizing long-running queries, and using indexing effectively.
· Understanding of SQL Server internals such as page allocation, buffer pool, and lock escalation.
· Hands-on experience with backup/restore strategies and consistency checks (DBCC CHECKDB).
· Experience with SQL Server Agent Jobs, alerts, and automation scripts (PowerShell or T-SQL).
· Ability to configure and manage SQL Server high-availability features.
· Exposure to tools like Redgate SQL Monitor, SolarWinds DPA, or similar is a plus.

Nice to Have
· Exposure to Azure SQL Database or cloud-hosted SQL Server infrastructure.
· Basic understanding of ETL workflows using SSIS.
· Microsoft Certification: MCSA / Azure Database Administrator Associate or equivalent.
· Experience with database deployments in CI/CD pipelines.

Key Traits
· Analytical and structured problem-solver.
· High attention to detail and data consistency.
· Proactive mindset with ownership of deliverables.
· Strong verbal and written communication skills.
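The posting stresses interpreting execution plans and using indexes effectively. SQL Server specifics (Query Store, DMVs) cannot be shown portably here, but the underlying idea, checking whether a query's plan switches from a full scan to an index search after an index is added, can be sketched with Python's built-in sqlite3 as a stand-in; the table and index names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT total FROM orders WHERE customer_id = 7"
print(plan(query))  # full table scan: no index on customer_id yet
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))  # the plan now searches via idx_orders_customer
```

The same habit applies on SQL Server with `SET STATISTICS IO` and graphical plans: confirm the plan actually changed, rather than assuming the index is used.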

Posted 5 days ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Primary skills:
- Domain -> User Experience Design -> Interaction design
- Technology -> Cloud Platform -> Microservices
- Technology -> Java -> Java - ALL
- Technology -> Java -> Springboot
- Technology -> Reactive Programming -> react JS

A day in the life of an Infoscion
As part of the Infosys delivery team, your primary role would be to ensure effective Design, Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Responsibilities
- Develop and deliver web-based applications on time and with very high code quality.
- Work closely with the creative team to ensure high-fidelity implementations.
- Analyze and recommend performance optimizations of the code and ensure code complies with security best practices.
- Debug and troubleshoot problems in live applications.
- Scope and draft a Technical Design Document and a Technical Requirements Document.
- Create detailed technical requirements with architectural guidance on how to implement.
- Collaborate with the project manager to develop sound technical requirements that clearly outline the architecture and tasks required to deliver a project.
- Review code written by developers.
- Define stories and tasks for implementations, assigning and tracking tasks to closure.

Skills & Experience
- Experience in building scalable and high-performance software systems
- Strong knowledge of HTTP, Web Performance, SEO and Web Standards
- Strong knowledge of Web Services, REST, MapReduce, Message Queues, Caching Strategies
- Strong knowledge of database management systems including RDBMS, NoSQL, etc.
- Good knowledge of Web Security, Cryptography and Security Compliance like PCI, PII, etc.
- Knowledge of architectural design patterns, performance tuning, database and functional designs
- Hands-on experience in Service Oriented Architecture
- Ability to lead solution development and delivery for the design solutions
- Experience in designing high-level and low-level documents is a plus
- Good understanding of SDLC is a pre-requisite
- Awareness of the latest technologies and trends
- Logical thinking and problem-solving skills along with an ability to collaborate
- Technical strength in mobile and web technologies; strong front-end development
- Team management, communication, and decision-making

Posted 5 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

GenAI Solution Architect

Responsibilities
- Design comprehensive architectures for GenAI applications utilizing Azure OpenAI LLMs.
- Integrate RAG systems to enhance AI applications with contextually relevant data retrieval capabilities.
- Ensure solutions are scalable, cost-effective, and aligned with business objectives.
- Oversee the development and deployment of GenAI solutions on Azure.
- Coordinate with cross-functional teams including developers and client technical and business teams.
- Work closely with stakeholders to define technical requirements and business goals.
- Present complex technical information in a clear and concise manner to non-technical stakeholders.

Requirements
- 10+ years of experience, with 5+ years in AI and machine learning and a strong focus on solution architecture.
- Hands-on experience deploying at least one end-to-end GenAI project.
- Proven experience with Azure OpenAI LLMs and their application in enterprise environments.
- Hands-on experience with Retrieval-Augmented Generation (RAG) systems.
- Expertise in Azure cloud services and architecture.
- Strong programming skills in Python and familiarity with AI development tools and libraries.
- Knowledge of AI model training, fine-tuning, and deployment processes.
- Excellent problem-solving and analytical skills.
- Strong leadership and project management abilities.
- Effective communication and interpersonal skills.
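Several requirements above center on Retrieval-Augmented Generation. A toy sketch of the retrieval step, ranking documents by cosine similarity, is shown below; the three-dimensional "embeddings" are hand-made stand-ins for vectors a real embedding model would produce, and a production system would use a vector store rather than a dict:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# toy "embeddings": in a real RAG system these come from an embedding model
docs = {
    "refund policy":    [0.9, 0.1, 0.0],
    "shipping times":   [0.1, 0.9, 0.1],
    "account deletion": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    # rank stored chunks by similarity to the query embedding
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# a query vector that happens to sit close to "refund policy"
context = retrieve([0.8, 0.2, 0.1])
print(context)  # the retrieved chunk(s) are then prepended to the LLM prompt
```

The generation half of RAG is simply calling the model with the retrieved chunks included in the prompt, which grounds the answer in enterprise data.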

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Key Responsibilities
- Design and implement ETL workflows using AWS Glue, Python, and PySpark.
- Develop and optimize queries using Amazon Athena and Redshift.
- Build scalable data pipelines to ingest, transform, and load data from various sources.
- Ensure data quality, integrity, and security across AWS services.
- Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
- Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
- Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications
- Hands-on experience with AWS Glue, Athena, and Redshift.
- Strong programming skills in Python and PySpark.
- Experience with ETL design, implementation, and optimization.
- Familiarity with S3, Lambda, CloudWatch, and other AWS services.
- Understanding of data warehousing concepts and performance tuning in Redshift.
- Experience with schema design, partitioning, and query optimization in Athena.
- Proficiency in version control (Git) and agile development practices.
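The Athena requirements above mention partitioning and query optimization. One common pattern is Hive-style partition keys encoded in the S3 prefix, so Athena can prune whole partitions it never has to scan; a minimal sketch of how a pipeline might compute such prefixes (bucket, table, and key names are hypothetical):

```python
from datetime import date

def partition_prefix(base: str, event_date: date, region: str) -> str:
    # Hive-style partitioning: Athena prunes partitions by matching
    # key=value pairs (region/year/month/day) in the S3 key prefix
    return (f"{base}/region={region}"
            f"/year={event_date.year}"
            f"/month={event_date.month:02d}"
            f"/day={event_date.day:02d}/")

print(partition_prefix("s3://my-bucket/events", date(2024, 3, 7), "ap-south-1"))
```

A query filtering on `region`, `year`, `month`, and `day` then reads only the matching prefix, which is often the single biggest cost and latency win in Athena.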

Posted 5 days ago

Apply

0 years

0 Lacs

Delhi, India

On-site

Company Description
Crazy Cub Animation Studio Pvt. Ltd., based in New Delhi, India, is a well-established animation production facility renowned for delivering over 5,000 minutes of quality animated content to both national and international markets. With a motivated team of talented artists, Crazy Cub is at the forefront of the industry. Our YouTube channel, Jugnu Kids, is setting new standards in children's content, attracting mass audiences from across the globe.

Role Description
This is a full-time on-site role located in Delhi, India, for an AI Artist. The AI Artist will be responsible for using artificial intelligence tools and techniques to create, enhance, and edit animated content. Day-to-day tasks include developing AI models for animation, collaborating with the team to integrate AI solutions into various projects, and fine-tuning AI-generated content to meet quality standards. The role also involves staying updated with the latest developments in AI and animation to continuously improve the studio's offerings.

Qualifications
- Proficiency in using AI tools and techniques for animation
- Experience in developing and fine-tuning AI models
- Strong skills in animation and editing software
- Excellent collaboration and communication skills
- Ability to work on-site in Delhi, India
- Experience in the animation industry is a plus
- Bachelor's degree in Computer Science, Animation, or related field

Posted 5 days ago

Apply

2.0 - 5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

About the Role
We are seeking a highly motivated and creative Platform Engineer with a true research mindset. This is a unique opportunity to move beyond traditional development and step into a role where you will ideate, prototype, and build production-grade applications powered by Generative AI. You will be a core member of a platform team, responsible for developing both internal and customer-facing solutions that are not just functional but intelligent. If you are passionate about the MERN stack, Python, and the limitless possibilities of Large Language Models (LLMs), and you thrive on building things from the ground up, this role is for you.

Core Responsibilities
- Innovate and Build: Design, develop, and deploy full-stack platform applications integrated with Generative AI, from concept to production.
- AI-Powered Product Development: Create and enhance key products such as intelligent chatbots for customer service and internal support; automated quality analysis and call auditing systems using LLMs for transcription and sentiment analysis; and AI-driven internal portals and dashboards to surface insights and streamline workflows.
- Full-Stack Engineering: Write clean, scalable, and robust code across the MERN stack (MongoDB, Express.js, React, Node.js) and Python.
- Gen AI Integration & Optimization: Work hands-on with foundation LLMs, fine-tuning custom models, and implementing advanced prompting techniques (zero-shot, few-shot) to solve specific business problems.
- Research & Prototyping: Explore and implement cutting-edge AI techniques, including setting up systems for offline LLM inference to ensure privacy and performance.
- Collaboration: Partner closely with product managers, designers, and business stakeholders to transform ideas into tangible, high-impact technical solutions.

Required Skills & Experience
- Experience: 2–5 years of professional experience in a software engineering role.
- Full-Stack Proficiency: Strong command of the MERN stack (MongoDB, Express.js, React, Node.js) for building modern web applications.
- Python Expertise: Solid programming skills in Python, especially for backend services and AI/ML workloads.
- Generative AI & LLM Experience (Must-Have):
  - Demonstrable experience integrating with foundation LLMs (e.g., OpenAI API, Llama, Mistral, etc.).
  - Hands-on experience building complex AI systems and implementing architectures such as Retrieval-Augmented Generation (RAG) to ground models with external knowledge.
  - Practical experience with AI application frameworks like LangChain and LangGraph to create agentic, multi-step workflows.
  - Deep understanding of prompt engineering techniques (zero-shot, few-shot prompting).
  - Experience or strong theoretical understanding of fine-tuning custom models for specific domains.
  - Familiarity with concepts or practical experience in deploying LLMs for offline inference.
- R&D Mindset: A natural curiosity and passion for learning, experimenting with new technologies, and solving problems in novel ways.

Bonus Points (Nice-to-Haves)
- Cloud Knowledge: Hands-on experience with AWS services (e.g., EC2, S3, Lambda, SageMaker).
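The role above calls out zero-shot and few-shot prompting. The difference is simply whether worked examples are included in the prompt, as this small builder sketch shows (the task and examples are invented):

```python
def build_prompt(task: str, question: str, examples=None) -> str:
    """Zero-shot when `examples` is empty; few-shot otherwise."""
    parts = [task]
    for q, a in (examples or []):
        parts.append(f"Q: {q}\nA: {a}")       # worked examples steer the model
    parts.append(f"Q: {question}\nA:")        # the model completes this line
    return "\n\n".join(parts)

task = "Classify the sentiment of each message as positive or negative."
zero_shot = build_prompt(task, "The support team was wonderful.")
few_shot = build_prompt(task, "The support team was wonderful.",
                        examples=[("I love this product.", "positive"),
                                  ("Terrible experience.", "negative")])
print(few_shot)
```

Few-shot examples tend to pin down the output format and label set; zero-shot relies entirely on the instruction, which is cheaper in tokens but less predictable.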

Posted 5 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking talented individuals who can provide an exceptional experience for our company and clients. If you think you have what it takes to join our team, this might be your chance!

🔹 Job Role: SQL Developer
🔹 Experience Required: 3+ Years

Required Skills:
-> 3+ years of experience as an SQL Developer or similar role.
-> Proven experience with Microsoft SQL Server 2017, 2019, and 2022.
-> Strong knowledge of T-SQL, SSIS (SQL Server Integration Services), and SSRS (SQL Server Reporting Services).
-> Experience with database performance tuning and optimization.
-> Familiarity with ETL tools and processes.
-> Experience in the FinTech industry or working with financial data is a plus.

📍 Location: Noida, Sector 63
📍 Mode: Work from Office only

Interested candidates can share their CV at 📩 tanya@yoekisoft.com
Let's Connect!

Posted 5 days ago

Apply

6.0 - 10.0 years

35 - 38 Lacs

Ahmedabad, Gujarat, India

On-site

The Role: Lead I Software Engineer
The Location: Hyderabad/Ahmedabad, India

The Team: We are looking for a highly motivated, enthusiastic and skilled software engineer with experience in architecting and building solutions to join an agile scrum team developing technology solutions. The team is responsible for developing and ingesting various datasets into the product platforms utilizing the latest technologies.

The Impact: Contribute significantly to the growth of the firm by:
- Developing innovative functionality in existing and new products
- Supporting and maintaining high-revenue products
- Achieving the above intelligently and economically using best practices

What's in it for you:
- Build a career with a global company.
- Work on products that fuel the global financial markets.
- Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities:
- Architect, design, and implement software-related projects.
- Perform analysis and articulate solutions.
- Manage and improve existing solutions.
- Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits.
- Collaborate effectively with technical and non-technical stakeholders.
- Actively participate in all scrum ceremonies, following Agile principles and best practices.

What We're Looking For:

Basic Qualifications:
- Bachelor's degree in computer science or equivalent
- 6 to 10 years' experience in application development
- Willingness to learn and apply new technologies
- Excellent communication skills, with strong verbal and writing proficiencies
- Good work ethic, self-starter, and results-oriented
- Excellent problem-solving and troubleshooting skills
- Ability to manage multiple priorities efficiently and effectively within specific timeframes
- Strong hands-on development experience in C# and Python
- Strong hands-on experience building large-scale solutions using a big data technology stack like Spark, microservice architecture, and tools like Docker and Kubernetes
- Experience in conducting application design and code reviews
- Able to demonstrate strong OOP skills
- Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development
- Experience implementing Web Services
- Experience working with SQL Server; ability to write stored procedures, triggers, performance tuning, etc.
- Experience working in cloud computing environments such as AWS

Preferred Qualifications:
- Experience with large-scale messaging systems such as Kafka is a plus
- Experience working with big data technologies like Elasticsearch and Spark is a plus
- Experience working with Snowflake is a plus
- Experience with a Linux-based environment is a plus

Posted 5 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AlgoAnalytics is looking for a Data Scientist who has a passion for AI/ML solution development and cutting-edge work in Gen AI. A minimum of 3 years of experience in AI/ML, including Deep Learning, and exposure to Gen AI/LLMs is required, along with tools/techs/libraries like Agentic AI, Dify, Autogen, LangGraph, etc. Detailed understanding of cloud technologies and deployment, with good hands-on experience, is an added advantage.

Key Responsibilities:

AI/ML & Gen AI Development:
- Develop Gen AI/LLM models using OpenAI, Llama, or other open-source/offline models.
- Utilize Agentic AI frameworks and tools like Dify, Autogen, LangGraph to build intelligent systems.
- Perform prompt engineering, fine-tuning, and dataset-specific model optimization.
- Implement cutting-edge research from AI/ML and Gen AI domains.
- Work on cloud-based deployments of AI/ML models for scalability and production readiness.

Leadership & Mentoring:
- Troubleshoot AI/ML, Gen AI, and cloud deployment challenges.
- Mentor junior team members and contribute to their skill development.
- Identify team members for mentoring, hiring, and technical initiatives.
- Ensure smooth project execution, deadline management, and client interactions.

Research & Innovation:
- Explore and implement recent AI/ML research in practical applications.
- Contribute to research publications, internal knowledge-sharing, and AI innovation.

Qualifications:
- 2–3 years of hands-on experience in Machine Learning, Deep Learning, and Gen AI/LLMs.
- Bachelor's/Master's degree in Computer Science, Engineering, Mathematics, or Statistics (with strong programming knowledge).

Skills:
- Programming: Strong expertise in Python and relevant ML/AI libraries.
- AI/ML & Gen AI: Deep understanding of Machine Learning, Deep Learning, and Generative AI/LLMs.
- Agentic AI & Automation: Experience with Dify, Autogen, LangGraph, and similar tools.
- Cloud & Deployment: Knowledge of cloud platforms (AWS, GCP, Azure) and MLOps deployment pipelines.
- Communication & Leadership: Strong ability to manage teams, meet project deadlines, and collaborate with clients.

Note: We would prefer Pune-based candidates. Candidates joining immediately will be preferred.
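Frameworks like Dify, Autogen, and LangGraph named in the posting automate an agent loop in which a model repeatedly chooses a tool, the runtime executes it, and the result is fed back. A bare-bones sketch of that loop, with a scripted stand-in where a real model call would go (tool names and the "plan" are invented):

```python
# A hand-rolled stand-in for what agent frameworks automate: the LLM picks
# a tool and its arguments; the loop executes the tool and records the result.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_llm(step):
    # stand-in for a real model call: a fixed "plan"; None means "finished"
    plan = [("add", (2, 3)), ("upper", ("done",)), None]
    return plan[step]

def run_agent():
    results = []
    step = 0
    while True:
        action = fake_llm(step)
        if action is None:            # the model decides it is finished
            break
        name, args = action
        results.append(TOOLS[name](*args))
        step += 1
    return results

print(run_agent())
```

Real frameworks add the hard parts this sketch omits: passing tool results back into the model's context, retries, guardrails, and multi-agent handoffs.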

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

Dear All,

We are seeking a highly capable Machine Learning Engineer with deep expertise in fine-tuning Large Language Models (LLMs) and Vision-Language Models (VLMs) for intelligent document processing. This role requires strong knowledge of PEFT techniques (LoRA, QLoRA), quantization, transformer architectures, prompt engineering, and orchestration frameworks like LangChain. You'll work on building and scaling end-to-end document processing workflows using both open-source and commercial models (OpenAI, Google, etc.), with an emphasis on performance, reliability, and observability.

Key Responsibilities:
- Fine-tune and optimize open-source and commercial LLMs/VLMs (e.g., LLaMA, Cohere, Gemini, GPT-4) for structured and unstructured document processing tasks.
- Apply advanced PEFT techniques (LoRA, QLoRA) and model quantization to enable efficient deployment and experimentation.
- Design LLM-based document intelligence pipelines for tasks like OCR extraction, entity recognition, key-value pairing, summarization, and layout understanding.
- Develop and manage prompting techniques (zero-shot, few-shot, chain-of-thought, self-consistency) tailored to document use-cases.
- Implement LangChain-based workflows integrating tools, agents, and vector stores for RAG-style processing.
- Monitor experiments and production models using Weights & Biases (W&B) or similar ML observability tools.
- Work with OpenAI (GPT series), Google PaLM/Gemini, and other LLM/VLM APIs for hybrid system design.
- Collaborate with cross-functional teams to deliver scalable, production-ready ML systems and continuously improve model performance.
- Build reusable, well-documented code and maintain a high standard of reproducibility and traceability.

Required Skills & Experience:
- Hands-on experience with transformer architectures and libraries like Hugging Face Transformers.
- Deep knowledge of fine-tuning strategies for large models, including LoRA, QLoRA, and other PEFT approaches.
- Experience in prompt engineering and developing advanced prompting strategies.
- Familiarity with LangChain, vector databases (e.g., FAISS, Pinecone), and tool/agent orchestration.
- Strong applied knowledge of OpenAI, Google (Gemini/PaLM), and other foundational LLM/VLM APIs.
- Proficiency in model training, tracking, and monitoring using tools like Weights & Biases (W&B).
- Solid understanding of deep learning, machine learning, natural language processing, and computer vision concepts.
- Experience working with document AI models (e.g., LayoutLM, Donut, Pix2Struct) and OCR tools (Tesseract, EasyOCR, etc.).
- Proficient in Python, PyTorch, and related ML tooling.

Nice-to-Have:
- Experience with multi-modal architectures for document + image/text processing.
- Knowledge of RAG systems, embedding models, and custom vector store integrations.
- Experience in deploying ML models via FastAPI, Triton, or similar frameworks.
- Contributions to open-source AI tools or model repositories.
- Exposure to MLOps, CI/CD pipelines, and data versioning.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Why Join Us?
- Work on cutting-edge GenAI and Document AI use-cases.
- Collaborate in a fast-paced, research-driven environment.
- Flexible work arrangements and growth-focused culture.
- Opportunity to shape real-world applications of LLMs and VLMs.
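Key-value pairing from OCR output, one of the pipeline tasks listed above, has a classical regex baseline that fine-tuned LLMs/VLMs are meant to outgrow; a small sketch on an invented invoice snippet:

```python
import re

OCR_TEXT = """\
Invoice Number: INV-00123
Invoice Date : 2024-02-29
Total Amount: 1,499.00
"""

def extract_key_values(text: str) -> dict:
    # classical baseline: one 'Key: Value' pair per line; a fine-tuned
    # LLM/VLM handles multi-column layouts and tables this regex cannot
    pairs = {}
    for line in text.splitlines():
        m = re.match(r"\s*(.+?)\s*:\s*(.+?)\s*$", line)
        if m:
            pairs[m.group(1)] = m.group(2)
    return pairs

print(extract_key_values(OCR_TEXT))
```

A baseline like this is still useful in practice: it makes a cheap regression check for the model-based extractor on well-behaved documents.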

Posted 5 days ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Company Description
Translab is a global IT consulting and solutions company dedicated to helping enterprises in BFSI, Utilities, Healthcare, and Public Sectors tackle their biggest challenges. With over 300 technologists and domain experts across 7 global centers in 5 countries (UK, US, Singapore, Canada, and India), we provide a broad spectrum of IT solutions and services. Our expertise ranges from cloud solutions to DevOps, databases to analytics, performance tuning to managed services. We have successfully led over 150 transformative journeys for various organizations, ensuring superlative performance and quicker success.

Role Description
This is a full-time on-site role for a VMware Admin L1, located in Kochi. The VMware Admin L1 will be responsible for daily troubleshooting, server administration, infrastructure management, and providing technical support. This role involves working with operating systems and ensuring that they run efficiently, resolving any issues that may arise.

Qualifications
- Troubleshooting skills for diagnosing and resolving system issues
- Experience in server administration and managing server infrastructures
- Knowledge of infrastructure management and related technologies
- Technical support skills to assist users and maintain system integrity
- Proficiency in operating systems, particularly those used in enterprise environments
- Excellent problem-solving abilities and attention to detail
- Ability to work independently as well as in team settings
- Bachelor's degree in Computer Science, Information Technology, or related field is a plus

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

The world’s top banks use Zafin’s integrated platform to drive transformative customer value. Powered by an innovative AI-powered architecture, Zafin’s platform seamlessly unifies data from across the enterprise to accelerate product and pricing innovation, automate deal management and billing, and create personalized customer offerings that drive expansion and loyalty. Zafin empowers banks to drive sustainable growth, strengthen their market position, and define the future of banking centered around customer value. What’s the opportunity? We are looking for a FinOps Analyst to help design and implement financially optimized, scalable, and secure Azure cloud infrastructure. You will integrate FinOps principles, manage cost-effective cloud operations, and collaborate with engineering teams to ensure financial accountability and resource efficiency. Key Responsibilities • Cloud Cost Monitoring & Analysis o Monitor Azure cost dashboards, usage trends, and budget adherence across multiple subscriptions, accounts, and resource groups. o Analyze granular cloud spend data and provide clear insights into resource-level consumption, highlighting trends, anomalies, and cost drivers. o Identify unusual cost spikes, unused resources, and underutilized services; recommend optimization actions to improve cloud ROI. o Work with engineering and infrastructure teams to align cloud usage with budgeted expectations and suggest tuning of misconfigured or inefficient resources. • Reporting & Insights o Generate regular reports, executive summaries, and visual dashboards on Azure spend, forecasting, and cost optimization metrics. o Support the budgeting and forecasting process for cloud spend with usage-based analytics. o Communicate findings and trends clearly to technical and non-technical stakeholders, including flags for areas of concern, overruns, or budget risks. 
• Tools & Platforms o Leverage Azure Cost Management and Billing, Azure Advisor, and related Microsoft tools for usage tracking and optimization recommendations. o Explore and propose additional tools and scripts (e.g., Power BI, Cost Explorer APIs, or Excel-based automation) to enhance reporting and alerting capabilities. • Cross-functional Support o Collaborate with cloud operations, DevOps, and engineering teams to implement optimization strategies. o Participate in regular cost review meetings and post-mortem analyses when unexpected cost behavior is observed. Required Skills & Qualifications 3 to 5 years of experience Basic to intermediate understanding of Microsoft Azure cloud infrastructure and services (IaaS, PaaS, tagging, subscription management). Hands-on experience with Azure Cost Management tools and dashboards. Proficiency in analyzing large datasets, identifying cost trends, and presenting actionable insights. Strong Excel skills, with comfort handling pivot tables, VLOOKUP/XLOOKUP, and charts. Analytical mindset with keen attention to detail and a proactive approach to problem-solving. Excellent verbal and written communication skills. Bachelor’s degree in Finance, Computer Science, Engineering, or related field. Preferred Qualifications Exposure to FinOps principles or formal FinOps certification. Experience working with multi-cloud or large-scale enterprise Azure environments. Familiarity with automation or scripting for reporting purposes (e.g., PowerShell, Python, or Azure CLI). Experience with reporting tools like Power BI, Tableau, or Looker. What’s in it for you Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement. Want to learn more about what you can look forward to during your career with us? 
Visit our careers site and our openings: zafin.com/careers Zafin welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process. Zafin is committed to protecting the privacy and security of the personal information collected from all applicants throughout the recruitment process. The ways in which Zafin collects, uses, stores, handles, retains, or discloses applicant information can be reviewed in Zafin’s privacy policy at https://zafin.com/privacy-notice/. By submitting a job application, you confirm that you agree to the processing of your personal data by Zafin as described in the candidate privacy notice.
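The cost-spike detection this role describes can be sketched in a few lines of Python; the resource-group names, figures, and 30% threshold below are all hypothetical, not anything Zafin specifies.

```python
# Minimal sketch of cloud cost-spike detection: flag resource groups
# whose latest monthly spend jumped more than `threshold` over the
# previous month. All names and numbers are invented for illustration.

def flag_cost_spikes(monthly_spend, threshold=0.30):
    """Return {group: fractional increase} for groups whose latest
    month rose more than `threshold` (0.30 = 30%) month over month."""
    spikes = {}
    for group, series in monthly_spend.items():
        if len(series) < 2 or series[-2] == 0:
            continue  # not enough history to compare against
        change = (series[-1] - series[-2]) / series[-2]
        if change > threshold:
            spikes[group] = round(change, 2)
    return spikes

spend = {
    "rg-analytics": [1200.0, 1250.0, 1900.0],  # ~52% jump: flagged
    "rg-web":       [800.0,  820.0,  830.0],   # stable: ignored
}
print(flag_cost_spikes(spend))  # {'rg-analytics': 0.52}
```

In practice the monthly figures would come from a cost export or API query rather than a hard-coded dict; the thresholding logic stays the same.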

Posted 5 days ago

Apply

1.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: C++ Developer | Contractual | Immediate Joiner | Mumbai (Onsite) Job Description: We are hiring for a C++ Developer position for one of our esteemed clients, a leader in device integration and high-performance computing. This is a 1-year contract-based opportunity with the potential for extension or permanent absorption based on performance. The role is onsite at Kandivali, Mumbai, and is open strictly to local candidates who are available to join within 20 days. The ideal candidate will be a passionate and experienced developer with end-to-end C++ development expertise, along with proficiency in CUDA programming, GPU performance tuning, and parallel computing. You will be working on real-time, high-performance applications and GPU-accelerated tasks, collaborating closely with software engineers and researchers to bring innovative solutions to life. Key Responsibilities: Develop and maintain high-performance software applications using modern C++ (C++11/14/17/20). Design and implement GPU-accelerated algorithms using CUDA to optimize system performance. Optimize CUDA kernels for memory efficiency, scalability, and overall performance. Conduct code reviews to ensure adherence to best practices and clean coding standards. Create and execute test plans to validate performance and functionality. Analyze and troubleshoot performance bottlenecks using appropriate debugging and profiling tools. Collaborate with cross-functional teams including product development, QA, and R&D to deliver robust and reliable software. Provide technical mentorship to junior developers as needed. Maintain clear and comprehensive documentation for all software components and processes. Required Skills & Experience: 4 to 5 years of hands-on experience in C++ software development. Strong command of modern C++ standards (C++11/14/17/20). Proven experience with CUDA programming and developing GPU-accelerated solutions.
Good understanding of parallel computing, multithreading, and GPU architecture. Experience with CUDA optimization techniques: memory hierarchy, streams, warp-level primitives, etc. Familiarity with tools such as Nsight, CUDA Memcheck, or other debugging/profiling utilities. Strong analytical and mathematical skills including knowledge of linear algebra and numerical methods. Job Details Location: Kandivali, Mumbai (Only local candidates will be considered) Work Mode: Work From Office (No hybrid/remote options) Type: Full-time, Contractual 1 Year Notice Period: Immediate joiners or max 20 days Why Apply? Opportunity to work on cutting-edge CUDA and high-performance computing projects. Exposure to a highly technical and collaborative environment. Chance for contract-to-hire based on performance. Note This role is strictly for candidates who are currently based in Mumbai and can join on short notice. Profiles from other locations will not be considered. Key Skills C++, CUDA, GPU Programming, Parallel Computing, Multithreading, Nsight, Memory Optimization, High-Performance Computing, CUDA Kernel Development This job is provided by Shine.com
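The block-wise reduction pattern that CUDA kernel optimization centers on (partial sums per thread block, then a combine step) can be illustrated language-agnostically; the sketch below uses Python threads purely to show the split-sum-combine shape, not as a substitute for a real kernel, and all names in it are invented.

```python
# Illustration of the parallel-reduction shape used in CUDA kernels:
# split the input into "blocks", reduce each block concurrently, then
# combine the partial results. A real CUDA kernel would do the block
# sums in shared memory on the GPU; threads here only mimic the shape.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, blocks=4):
    """Split `data` into roughly `blocks` chunks, sum each chunk
    concurrently, then combine the partial sums (final reduction)."""
    size = max(1, len(data) // blocks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=blocks) as pool:
        partials = list(pool.map(sum, chunks))  # per-block reduction
    return sum(partials)                        # combine step

print(parallel_sum(list(range(1, 101))))  # 5050
```

The same two-phase structure underlies the memory-hierarchy and warp-level optimizations the posting mentions: the per-block phase is where shared memory and warp primitives pay off.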

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Primary Skills Required: Mainframes Architect, COBOL, PL/1, CICS, JCL, DB2 Tools: IBM Rational tools (IDz, RDz, RBD, RTC) A day in the life of an Infoscion You would provide technology consultation and assist in defining the scope and sizing of work. You would implement solutions, create technology differentiation, and leverage partner technologies. Additionally, you would participate in competency development with the objective of ensuring best-fit, high-quality technical solutions. You would be a key contributor to creating thought leadership within your area of technology specialization, in compliance with the guidelines, policies, and norms of Infosys. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Knowledge of architectural design patterns, performance tuning, and database and functional designs Hands-on experience in Service-Oriented Architecture Ability to lead solution development and delivery for the designed solutions Experience in designing high-level and low-level documents is a plus Good understanding of the SDLC is a prerequisite Awareness of the latest technologies and trends Logical thinking and problem-solving skills, along with an ability to collaborate

Posted 5 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Software Engineer Consultant / Expert 34326 Location: Chennai (Onsite/Hybrid) Employment Type: Contract Budget: Up to ₹24 LPA (Starting at ₹21 LPA) Notice Period: Immediate Joiners Preferred Assessment: Full Stack Backend Java (via Hacker Platform) Position Overview We are seeking a highly experienced Full Stack Java Developer with strong expertise in backend development, cloud technologies, and data solutions. This role involves building and maintaining a global logistics data warehouse on Google Cloud Platform (GCP) , supporting key supply chain operations and enhancing visibility from production to final delivery. The ideal candidate will have a minimum of 6+ years of relevant experience and hands-on skills in BigQuery, Microservices, and REST APIs , with exposure to tools like Pub/Sub, Kafka, and Terraform . Key Responsibilities Collaborate closely with product managers, architects, and engineers to design and implement technical solutions Develop and maintain full-stack applications using Java, Spring Boot, and GCP Cloud Run Build and optimize ETL/data pipelines to apply business logic and transformation rules Monitor and enhance data warehouse performance on BigQuery Support end-to-end testing: unit, functional, integration, and user acceptance Conduct peer reviews, code refactoring, and ensure adherence to best coding practices Implement infrastructure as code and CI/CD using tools like Terraform Required Skills Java, Spring Boot Full Stack Development (Backend-focused) Google Cloud Platform (GCP) – Minimum 1 year hands-on with BigQuery Cloud Run, Microservices, REST APIs Messaging: Pub/Sub, Kafka DevOps & Infrastructure: Terraform Exposure to AI/ML integration is a plus Experience Requirements Minimum 6+ years of experience in Java/Spring Boot development Strong hands-on experience with GCP services, particularly BigQuery Experience in developing enterprise-grade microservices and backend systems Familiarity with ETL pipelines, data 
orchestration, and performance tuning Agile team collaboration and modern development practices Preferred Experience Exposure to AI agents or AI-driven application features Experience in large-scale logistics or supply chain data systems Education Requirements Bachelor’s Degree in Computer Science, Information Technology, or related field (mandatory) Skills: REST APIs, Terraform, full stack development, Google Cloud Platform (GCP), microservices, Kafka, BigQuery, Pub/Sub, Java, Cloud Run, Spring Boot
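The "apply business logic and transformation rules" step of the ETL pipelines this role builds can be sketched as a pure record-transform function; every field name and rule below is invented for illustration, not taken from any real logistics schema.

```python
# Hypothetical sketch of an ETL transformation step for a logistics
# data warehouse: normalize one raw shipment event into the row shape
# the warehouse expects. Field names and rules are invented.

def transform(record):
    """Apply business rules to a raw shipment event."""
    return {
        # rule: identifiers are trimmed and stored uppercase
        "shipment_id": record["id"].strip().upper(),
        # rule: missing statuses default to UNKNOWN
        "status": record.get("status", "UNKNOWN").upper(),
        # rule: weights arrive in grams, the warehouse stores kilograms
        "weight_kg": round(record["weight_g"] / 1000, 3),
    }

raw = [{"id": " abc123 ", "status": "in_transit", "weight_g": 2500}]
rows = [transform(r) for r in raw]
print(rows[0])
# {'shipment_id': 'ABC123', 'status': 'IN_TRANSIT', 'weight_kg': 2.5}
```

In the pipeline described by the posting, the equivalent logic would typically live in a BigQuery SQL transform or a Spring Boot service rather than a standalone script; keeping it a pure function per record is what makes it easy to unit test either way.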

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role : Security Architect Project Role Description : Define the cloud security framework and architecture, ensuring it meets the business requirements and performance goals. Document the implementation of the cloud security controls and transition to cloud security-managed operations. Must have skills : Palo Alto Networks Firewalls Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: Seeking a technically skilled and proactive Network Security Technical Lead to manage and enhance our enterprise security infrastructure. This role focuses on securing the network perimeter through the administration of Palo Alto Firewalls, Web Application Firewalls (WAF), Bot Protection, Email Security, Endpoint Detection and Response (EDR), and IPS/IDS systems. You will be responsible for firewall policy tuning, VPN support, DNS and IDS/IPS signature management, bot defense rule enforcement, and email threat protection. A key part of the role involves proactively identifying and addressing security gaps, ensuring compliance with internal standards, and continuous improvement through regular audits, service reporting, and cross-functional collaboration. Roles & Responsibilities: -Administer and support enterprise firewall systems, with a focus on Palo Alto platforms. -Perform policy tuning and propose enhancements based on incident trends and evolving threat landscapes. Manage IDS/IPS signature updates, including additions, deletions, and modifications. -Support URL filtering configurations and enforcement. -Provide operational support for VPN services and troubleshoot connectivity issues. -Identify security gaps and recommend remediation strategies as part of continuous improvement. -Conduct quarterly firewall rule audits and generate compliance reports. -Maintain and update operational runbooks and documentation. -Manage patching activities for firewall infrastructure. 
-Deliver regular service performance reports and participate in incident/problem/change management processes. -Troubleshoot firewall configuration issues, including backup/restore and application break-fixes. -Manage bot protection policies and rules using Cequence. -Configure appropriate logging levels for bot traffic analysis. -Perform troubleshooting and incident support related to bot activity. -Apply and validate standard and emergency rule requests. -Perform regular signature updates to maintain bot defense effectiveness. -Fine-tune DNS policies and implement domain-based filtering using Cloudflare. -Monitor and report on DNS threats weekly/monthly, including actions taken. -Ensure DNS configurations align with enterprise security posture and compliance requirements. -Policy tuning, rule optimization, VPN support, and quarterly audit reporting using Palo Alto; incident-driven configuration backup, restore, and break-fix troubleshooting. -Signature lifecycle management (add/update/delete), URL filtering enforcement, and patch management aligned with incident trends and continuous improvement goals. -Policy and rule management, logging configuration, incident triage, and signature updates using Cequence Bot Defense; validation of standard and emergency rule requests. -DNS policy fine-tuning, domain-based filtering, and weekly/monthly threat reporting using Cloudflare DNS. -Service reporting, runbook maintenance, and change/problem/incident management across firewall and bot/DNS security layers. -Palo Alto, Cequence (Bot Defense), Cloudflare (DNS). Professional & Technical Skills: -Strong hands-on experience with Palo Alto firewalls and associated security features. -Proficiency with Cequence for bot protection and Cloudflare for DNS security. -Solid understanding of network security principles, VPNs, IDS/IPS, and URL filtering. -Familiarity with ITIL-based incident, problem, and change management processes.
-Ability to analyze logs and traffic patterns to identify anomalies and optimize rules. -Experience with patch management, service reporting, and compliance audits. -Strong documentation skills and attention to detail. -Experience in cybersecurity operations, with a focus on network and perimeter security. -Hands-on experience managing enterprise firewalls, preferably Palo Alto. -Experience in bot protection and DNS security, including tools like Cequence and Cloudflare. -Proven track record in troubleshooting complex firewall and VPN issues in large-scale environments. -Experience conducting firewall audits, rule reviews, and implementing policy enhancements. -Demonstrated ability to manage incident response and change management processes. -Experience working in a global delivery model and collaborating with cross-functional teams. -Strong analytical and problem-solving skills with a continuous improvement mindset. Additional Information: - The candidate should have a minimum of 5 years of experience with Palo Alto Networks Firewalls. - This position is based at our Gurugram office. - 15 years of full-time education is required.

Posted 5 days ago

Apply

4.0 years

0 Lacs

New Delhi, Delhi, India

On-site

🧠 NLP Expert – Researcher/Engineer 📅 Experience: 3–4 years 💰 Salary Range: ₹15 – ₹30 LPA About the Role We are seeking an experienced NLP Expert – Researcher/Engineer to lead the design and pretraining of advanced transformer-based language models from scratch using sequential, structured data. This role demands a deep understanding of transformer architectures, strong command of the Hugging Face ecosystem, and the ability to convert complex, real-world time-series or geospatial data into meaningful natural language outputs. You'll be working on cutting-edge applications that require bridging non-traditional inputs with natural language generation in a production-ready environment. Responsibilities Architect and pretrain encoder-only, decoder-only, and encoder-decoder transformer models (e.g., BERT, GPT, T5) from scratch using structured, sequential datasets. Design pipelines that convert non-textual data—such as spatial, temporal, or sensor-based signals—into NLP-compatible formats. Build end-to-end systems that generate natural language descriptions, summaries, or narratives from raw structured inputs. Apply rigorous model evaluation strategies and optimize for downstream language generation tasks. Collaborate across functions with data engineers, domain specialists, and product teams to deliver production-ready solutions. Stay abreast of and incorporate the latest research in NLP, pretraining strategies, and model efficiency. Required Skills & Experience 3–4 years of hands-on experience in Natural Language Processing and deep learning. Strong expertise in transformer-based architectures and foundational NLP theory. Proven experience in pretraining language models from scratch, not just fine-tuning existing ones. Advanced proficiency in Python, with experience in PyTorch, TensorFlow, and the Hugging Face Transformers & Datasets libraries. Familiarity with sequential, time-series, or geospatial data, and how to integrate such data into NLP pipelines.
Experience with data preprocessing , including transforming unconventional inputs into model-compatible formats. Strong problem-solving, communication, and cross-functional collaboration skills. Bonus Points For Experience in multimodal modeling (e.g., combining text, time, and spatial inputs). Knowledge of MLOps , model optimization , and deployment workflows . Contributions to open-source NLP projects or publications in peer-reviewed ML/NLP conferences. Why Join Us? This is your opportunity to shape how machines interpret and articulate real-world, structured data through language. If you’re passionate about building from the ground up and redefining what’s possible with NLP, we want to hear from you.
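One common way to bridge structured sequential data and a text tokenizer, of the kind this role calls for, is to serialize each timestep into a compact textual record; the field names, separator, and sensor readings below are all invented for illustration, not a prescribed format.

```python
# Hypothetical sketch of turning structured time-series readings into
# a text sequence an NLP tokenizer can consume. The (hour, sensor,
# value) schema and the " ; " separator are invented conventions.

def serialize_readings(readings):
    """Serialize (hour, sensor, value) tuples into one text string."""
    parts = [f"t={h} {sensor}={value}" for h, sensor, value in readings]
    return " ; ".join(parts)

readings = [(0, "temp", 21.5), (1, "temp", 22.0), (1, "humidity", 40)]
text = serialize_readings(readings)
print(text)  # t=0 temp=21.5 ; t=1 temp=22.0 ; t=1 humidity=40
```

Strings like this can then be fed to a Hugging Face tokenizer (or used to train one) exactly as ordinary text would be; richer schemes add dedicated special tokens for field boundaries instead of plain punctuation.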

Posted 5 days ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Role: Data Analyst
Experience: 5+ Years
Location: Delhi NCR
Notice: Immediate Joiners Only
Job Description:
▪ Lead conversations with senior management, category teams, UX, design, production, etc. to scope out requirements, define tests for A/B testing, create hypotheses, develop variations to test, build tests, and thoroughly QA them.
▪ Generate insights based on trends observed in the data, tie them back to business initiatives, and update business owners.
▪ Analyse online user behaviour, conversion data, and the customer journeys that lead to an optimized user experience.
▪ Strong analytical skills to investigate and understand where A/B testing may be required, and to analyze the success of tests that have been run.
▪ Experience working with cloud-based and open-source technologies, including Amazon Web Services and/or Google BigQuery and Google Cloud Platform.
▪ Experience working with a data visualization/BI/MI tool (Tableau, Power BI, Data Studio, etc.), dashboard performance tuning to handle large datasets, and the deployment process in an enterprise setup.
▪ Strong dashboard skills coupled with analytical thinking to help define business-specific dashboards.
▪ Evaluate tools and technologies to develop best-in-class analytic strategies.
▪ Use statistical tools and techniques to identify and evaluate relationships between data fields and to define customer segments.
▪ Good understanding of customer segmentation techniques and audience activation using downstream systems.
▪ Strong understanding of tag management tools, variables, and optimization tool setup.
▪ Execute quantitative analysis that translates data into actionable insights.
▪ Manage and interpret large amounts of complex data across functions (web, app, CRM, operations, etc.), with experience in building correlations, forecasts, and attribution models.
▪ Ensure data integrity and recognize/resolve data inconsistencies in reports, analyses, and analytics toolsets.
▪ Ability to work with cloud platforms for data analytics, reporting, and statistical needs.
▪ Able to extract and manipulate complex data using queries from a CDP-like system.
▪ Develop and enhance automated reporting templates that communicate KPIs, trends, and deviations to stakeholders.
▪ Effective presentation and storyboarding skills, with exposure to executive-level presentations.
▪ Understanding of the digital marketing landscape and the ability to derive campaign analysis in terms of campaign performance and channel optimization.
▪ Experience conducting industry research and analyzing clients' business performance in at least 2-3 scenarios.
▪ Strong understanding of how to define and set benchmarks for various KPIs.
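Judging the success of an A/B test, as described above, often comes down to a two-proportion z-test on conversion counts; a standard-library-only sketch follows, with all traffic numbers invented (real pipelines typically use SciPy or an experimentation platform instead).

```python
# Two-proportion z-test for an A/B conversion comparison, using only
# the standard library. All counts below are made-up example traffic.
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing B's rate against A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = ab_z_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(round(z, 2), round(p, 4))  # p < 0.05 here suggests a real lift
```

The same function answers the pre-test question too: plugging in the minimum lift you care about shows roughly how much traffic is needed before the p-value could clear your significance bar.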

Posted 5 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Location: Gr. Noida / Hyderabad (no relocation). 24x7 rotational shifts; 5 days work from office. Only immediate joiners required. Interview Mode: L1 virtual, L2 face-to-face. JD: Hands-on experience with Oracle database activities, SQL, OEM, GoldenGate replication (CDC BDA), and RAC/EXA setup. Expertise in SQL profiling/performance tuning, database monitoring, backup and restore, Data Guard, the Grid Control toolset, etc. Responsible for technical database support of infrastructure, applications, and other components and processes. Participate in planning, development of specifications, and other supporting documentation processes. Knowledge of the finance/banking industry, its terminology, and its data structures is an add-on. Knowledge of SQL Server as well as Oracle databases. Knowledge of Identity Framework. Experienced technical knowledge in the specialty area with basic knowledge of complementary infrastructures. A fast learner with the ability to dive into new products and technologies, develop subject matter expertise, and drive projects to completion. A team player with good written and verbal communication skills who can mentor other members of the production support group. Candidates with knowledge of and experience in scripting languages (ksh, Perl, etc.) would be preferred for this role. Understanding of ITIL processes. Utilizing monitoring tools effectively. This job is provided by Shine.com

Posted 5 days ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

#CMS is urgently #hiring for the position of Senior Security Analyst - #SOCLead. If you are an immediate joiner and interested in this opportunity, please share your resume at salma.saifi@cmsitservices.com. Experience: 4+ years in SOC. Location: Ghaziabad. Key Responsibilities: Continuously monitor security alerts, incidents, and health dashboards. Investigate security alerts and ensure closure by coordinating with the concerned teams. Analyze and report on bad-reputation IPs; forward findings to the network team for appropriate blocking. Develop and customize reports, rules, and dashboards as per client requirements. Create and tune incident alert rules in the SIEM platform. Integrate various security devices and log sources into the SIEM (e.g., firewalls, routers, servers). Perform fine-tuning of security alerts to reduce false positives and improve detection accuracy. Monitor and manage SIEM storage components such as Archiver. Maintain connectivity checks of all RSA NetWitness components (Log Decoder, Concentrator, ESA, etc.). Back up logs from cold storage to virtual machines (VMs) as per the retention policy. Ensure the integrity, availability, and confidentiality of event and log data. Provide end-to-end resolution for HPSM (HP Service Manager) tickets. Knowledge of network security, IP reputation, and attack vectors. Familiarity with HPSM or other ITSM tools for ticket lifecycle management. Tools & Technologies: RSA NetWitness SIEM; HPSM or other ITSM tools; security dashboards and reporting tools; cold-storage backup systems; network threat intelligence platforms. Thanks and regards, Salma Saifi

Posted 5 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us: Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm’s mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology. About the role: He/She/They will be developing the detailed design structure, implementing the best practices and coding standards, leading a team of developers for successful delivery of the project. You will be working on design, architecture and hands-on coding. Responsibilities Design and implement software of embedded/IOT devices and systems from requirements to production and commercial deployment. Design, develop, code, test and debug system software. Review code and design. Analyze and enhance efficiency, stability and scalability of system resources. Integrate and validate new product designs. Support software QA and optimize I/O performance. Provide post production support. Interface with hardware design and development. Assess third party and open source software Requirements: Proven working experience in software engineering Experience in hands-on development and troubleshooting on embedded targets Solid programming experience in C or C++ Proven experience in embedded systems design with preemptive, multitasking real-time operating systems Familiarity with software configuration management tools, defect tracking tools, and peer review Excellent knowledge of OS coding techniques, IP protocols, interfaces and hardware subsystems Adequate knowledge of reading schematics and data sheets for components Strong documentation and writing skills An entrepreneurial spirit combined with strong program and product management skills. Proven success in building, motivating and retaining teams. Excellent written and verbal communication skills with the ability to present complex plans and designs. 
Excellent judgment, organizational, and problem-solving skills. Excellent design and architecture knowledge. Preferred Qualification: Bachelor's/Master's Degree in Computer Science or equivalent. Skills that will help you succeed in this role: Tech Stack: Languages: C/C++; DB: SQLite; Protocols: MQTT, TCP, HTTP, etc.; Backend: AWS IoT. Strong experience in scaling, performance tuning, and optimization at the client layer. A hands-on leader and problem solver with a passion for excellence. Why join us: Because you get an opportunity to make a difference, and have a great time doing that. You are challenged and encouraged here to do stuff that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be. Learn more about the exciting work we do in Tech by reading our Engineering blogs. Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants – and we are committed to it. India's largest digital lending story is brewing here. This is your opportunity to be a part of the story!

Posted 5 days ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

At SmartBear, we believe building great software starts with quality—and we’re helping our customers make that happen every day. Our solution hubs—SmartBear API Hub, SmartBear Insight Hub, and SmartBear Test Hub, featuring HaloAI, bring visibility and automation to software development, making it easier for teams to deliver high-quality software faster. SmartBear is trusted by over 16 million developers, testers, and software engineers at 32,000+ organizations – including innovators like Adobe, JetBlue, FedEx, and Microsoft. Associate Software Engineer – JAVA QMetry Test Management for Jira Solve challenging business problems and build highly scalable applications Design, document and implement new systems in Java 17/21 Develop backend services and REST APIs using Java, Spring Boot, and JSON Product intro: QMetry is undergoing a transformation to better align our products to the end users’ requirements while maintaining our market leading position and strong brand reputation across the Test Management Vertical. Go to our product page if you want to know more about QMetry Test Management for Jira | Smartbear. You can even have a free trial to check it out 😊 About the role: As an Associate Software Engineer, you will be integral part of this transformation and will be solving challenging business problems and build highly scalable and available applications that provide excellent user experience. Reporting into the Lead Engineer you will be required to develop solutions using available tools and technologies and assist the engineering team in problem resolution by hands-on participation, effectively communicate status, issues, and risks in a precise and timely manner. You will write code as per product requirements and create new products, create automated tests, contribute in system testing, follow agile mode of development. 
You will interact with both business and technical stakeholders to deliver high-quality products and services that meet business requirements and expectations while applying the latest available tools and technology. You will develop scalable, real-time, low-latency data egress/ingress solutions in an agile delivery model, create automated tests, contribute to system testing, and follow an agile mode of development. We are looking for someone who can design, document, and implement new systems, as well as enhancements and modifications to existing software, with code that complies with design specifications and meets security and Java best practices. You should have 2-4 years of hands-on experience working on the Java 17 platform or higher and hold a Bachelor's Degree in Computer Science, Computer Engineering, or a related technical field. API-driven development: experience working with remote data via REST and JSON, and in delivering high-value projects using the Agile (SCRUM) methodology, preferably with the JIRA tool. Good understanding of OOP, Java, the Spring Framework, and JPA. Experience with application performance tuning, scaling, security, and resiliency best practices. Experience with relational databases such as MySQL, PostgreSQL, MSSQL, and Oracle. Experience with AWS EC2, RDS, S3, Redis, Docker, GitHub, SSDLC, and Agile methodologies, plus development experience in a SCRUM environment. Experience with the Atlassian suite of products and the related ecosystem of plugins is a plus. Why you should join the SmartBear crew: You can grow your career at every level. We invest in your success as well as the spaces where our teams come together to work, collaborate, and have fun. We love celebrating our SmartBears; we even encourage our crew to take their birthdays off. We are guided by a People and Culture organization - an important distinction for us. We think about our team holistically – the whole person.
We celebrate our differences in experiences, viewpoints, and identities because we know it leads to better outcomes. Did you know: Our main goal at SmartBear is to make our technology-driven world a better place. SmartBear is committed to ethical corporate practices and social responsibility, promoting good in all the communities we serve. SmartBear is headquartered in Somerville, MA with offices across the world including Galway Ireland, Bath, UK, Wroclaw, Poland, Ahmedabad and Bangalore India. We’ve won major industry (product and company) awards including B2B Innovators Award, Content Marketing Association, IntellyX Digital Innovator and Built-in Best Places to Work. SmartBear is an equal employment opportunity employer and encourages success based on our individual merits and abilities without regard to race, color, religion, gender, national origin, ancestry, mental or physical disability, marital status, military or veteran status, citizenship status, age, sexual orientation, gender identity or expression, genetic information, medical condition, sex, sex stereotyping, pregnancy (which includes pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), or any other legally protected status.

Posted 5 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a strategic, results-oriented, and hands-on VP of AI and Automation to lead our intelligent automation initiatives. You will drive the strategy to leverage AI for transformative business process automation, and you will lead a world-class team to design, build, and deploy sophisticated AI and automation solutions that drive efficiency, reduce costs, and create a significant competitive advantage. What You'll Do Lead & Inspire: Manage, mentor, and grow a high-performing team of AI and automation engineers, fostering a culture of innovation, collaboration, and execution. Architect the Future: Design and oversee the implementation of our core AI and automation platforms, including sophisticated Agentic Architectures that enable complex, autonomous workflows. Drive LLM Strategy: Lead the strategic selection, tuning, optimization, and testing of Large Language Models (LLMs). This includes establishing best practices for advanced Prompt Engineering and building robust Retrieval-Augmented Generation (RAG) pipelines to ensure our models are accurate and relevant. Operational Excellence: Champion a robust MLOps culture, implementing best-in-class processes for model deployment, monitoring, and lifecycle management. Secure Our AI: Define and enforce state-of-the-art LLM Security protocols to protect against adversarial attacks, ensure data privacy, and maintain model integrity. Scale with the Cloud: Own the Cloud Architecture strategy for all AI and automation workloads, ensuring our infrastructure is scalable, cost-effective, and resilient on platforms like AWS, GCP, or Azure. What You'll Need 10+ years of experience in software engineering, with at least 5+ years in a hands-on senior leadership role managing AI-, ML-, or automation-focused teams. Deep, hands-on expertise in the modern AI stack, including LLMs, RAG systems, and vector databases. Demonstrated experience designing and building complex systems using Agentic Architecture and multi-agent frameworks.
Expert-level understanding of MLOps principles and tools (e.g., Kubeflow, MLflow, Seldon Core). Strong background in designing and managing scalable Cloud Architecture for AI applications. In-depth knowledge of LLM Security best practices, including red teaming, output validation, and mitigating common vulnerabilities.
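The retrieval step of the RAG pipelines this role owns can be shown with a deliberately toy sketch: score documents by word overlap with the query, then splice the best match into the prompt. A production system would use embeddings and a vector database instead, and the corpus text and prompt wording below are entirely invented.

```python
# Toy sketch of RAG retrieval: pick the document sharing the most
# words with the query, then build the augmented prompt. Real systems
# use embedding similarity over a vector database; this only shows
# the retrieve-then-augment shape. All text below is invented.

def retrieve(query, docs):
    """Return the doc with the largest word overlap with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "Refunds are processed within five business days.",
    "Our API rate limit is 100 requests per minute.",
]
query = "what is the API rate limit"
context = retrieve(query, docs)
prompt = f"Answer using this context:\n{context}\nQuestion: {query}"
print(context)  # Our API rate limit is 100 requests per minute.
```

Swapping the overlap score for cosine similarity over embeddings, and the list for a vector store, turns this shape into the RAG pipeline the posting describes; the prompt-assembly step is also where the LLM-security concerns above (e.g., validating retrieved content before it reaches the model) attach.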

Posted 5 days ago

Apply