
30535 Scalability Jobs - Page 16

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Mumbai, Maharashtra, India; Gurugram, Haryana, India.

Minimum qualifications: Bachelor's degree or equivalent practical experience. 10 years of experience with cloud native architecture in a customer-facing or support role. Experience with leadership, such as people management, team lead, mentorship, or coaching. Ability to travel up to 25% of the time as needed.

Preferred qualifications: Experience with the data ecosystem, including open source, architecting and developing distributed systems, along with experience in data processing, business analytics and visualization, data science and AI. Experience as a Pre-Sales Manager or a people manager in a technical customer-facing role within a professional services or Sales Engineering team. Experience managing a team through business processes, operations and career development, including account mapping, quota setting, quarterly/annual performance management, and managing sensitive information. Experience presenting to both technical stakeholders and executives, leading conversations that drive business opportunities.

About The Job: The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.
As a Customer Engineering (CE) Manager, you lead and deploy a team of subject-matter experts responsible for working alongside our customers to provide trusted technical and solution advice to accelerate workload migration and remove technical impediments. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities: Lead a team of Customer Engineers and build a growth culture. Focus on talent strategy and skills development to deliver successful outcomes for our customers and accelerate business goals. Build partnerships with customers. Provide leadership related to the convergence of Data, Analytics and AI, as well as industry trends. Partner with Google Cloud Sales leadership to define technical go-to-market strategies and an execution plan for the team's business. Balance technical leadership with operational excellence: lead workload and opportunity review meetings and provide insight into how to achieve a technical agreement and migration strategy, working directly with our customers, partners and prospects. Work cross-functionally across Google, our partners, and the team to resolve technical roadblocks, including capacity needs, constraints and product tests affecting customer satisfaction.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Minimum qualifications: Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience. 10 years of experience in managing critical incidents. 3 years of experience in providing technical or infrastructure support. Experience in an operational and leadership role in a cloud services delivery environment.

Preferred qualifications: Certification in ITIL v4 or Project Management. Experience in supporting and managing technical environments within a multi-tenant cloud environment. Experience with industry tools (e.g., Salesforce, Google Workspace). Knowledge of leadership in engineering, operations, or executive support roles. Ability to influence the momentum of incident response across multiple teams. Ability to work in a changing environment with strong prioritization and time management.

About The Job: The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners. As a Staff Critical Incident Manager, you will execute existing, critical incident response operations. You will manage customer-impacting incidents and executive-level customer escalations. You will also collaborate and partner with the entire Google Cloud organization to drive resolution.
You will partner and collaborate with Infrastructure, Engineering, Technical Support, Product Owners, Customer Success and Business Leadership to ensure delivery of a strong support experience for customers. You will ensure transparent communication that drives internal and external customer satisfaction. Google Cloud accelerates organizations’ ability to digitally transform their business with the best infrastructure, platform, industry solutions and expertise. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology – all on the cleanest cloud in the industry. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities: Lead incident response efforts in a fast-paced, 24/7 on-call environment, ensuring clear and transparent communications with internal teams and customers. Coordinate and manage escalations from executives or key customers, driving cross-functional collaboration to deliver fast and effective resolutions. Act as an Incident Response thought leader, contributing to process improvements and implementing automation for continuous optimization of incident management workflows. Facilitate post-incident reviews, using insights to recommend enhancements to incident response strategies and ensure alignment across all teams. Lead complex projects to address ambiguity in operations, overcoming obstacles to deliver impactful outcomes for both customers and Google.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities:
● Utilize Python to build and sustain software applications with a strong focus on AI and machine learning.
● Ensure performance, usability, and scalability in AI applications by leveraging advanced Python techniques.
● Identify and resolve issues to maintain low latency and high availability in AI systems.
● Participate in and conduct code reviews, providing constructive feedback on software design and architecture.
● Work with cross-functional teams to define project requirements and scope, ensuring alignment with AI objectives.
● Apply your expertise in AI libraries and frameworks such as LangChain, OpenAI, LlamaIndex, Pandas, NumPy, or similar tools.
● Work with Large Language Models such as GPT-4 and Llama, and vector databases such as Pinecone, ChromaDB, and FAISS.
● Integrate Python applications with databases, ensuring efficient data storage and retrieval.
● Utilize Amazon Web Services (AWS) for deploying and managing cloud-based AI applications.
● Utilize robust analytical and problem-solving abilities to tackle complex AI challenges.
● Exhibit excellent communication and teamwork skills to collaborate effectively within the team and with stakeholders.

Key Requirements:
● Degree in Computer Science, Engineering, or a related field.
● Minimum 6 years of relevant experience is a must.
● Proven experience as a Python Developer, with a focus on AI and machine learning projects.
● Strong knowledge of Django, Flask, or similar Python frameworks, with an emphasis on AI integration.
● Proficiency in integrating Python applications with databases.
● Experience with Amazon Web Services (AWS) for cloud-based solutions.
● Familiarity with large language model (LLM) frameworks for AI development.
● Familiarity with concepts such as data chunking, embedding, and similarity search approaches like cosine similarity.

Why Join Us?
● Be part of a team that is working on cutting-edge technology products in the AI and SaaS space.
● Experience high growth potential within a pioneering company.
● Engage in a challenging environment where you solve interesting problems every day.
● Work on innovative products that have a real impact on enterprise customers.
● Collaborate with a talented and diverse team of experts in the field.
● Enjoy a flexible work environment with ample opportunities for growth and development.
● Receive a competitive compensation and benefits package.
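The requirements above mention data chunking, embedding, and cosine-similarity search. As a minimal sketch of the core idea, cosine similarity and top-k retrieval can be computed over toy vectors (pure Python; the three-dimensional "embeddings" and document ids below are hypothetical stand-ins for real model output, not values from any actual system):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, chunks, k=2):
    """Rank (chunk_id, embedding) pairs by similarity to the query vector."""
    scored = [(cid, cosine_similarity(query, vec)) for cid, vec in chunks]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

# Toy 3-dimensional "embeddings" standing in for real model output.
chunks = [("doc1", [1.0, 0.0, 0.0]),
          ("doc2", [0.0, 1.0, 0.0]),
          ("doc3", [0.9, 0.1, 0.0])]
print(top_k([1.0, 0.0, 0.0], chunks, k=2))  # doc1 first, then doc3
```

A vector database such as Pinecone or FAISS performs the same ranking, but over millions of vectors with approximate-nearest-neighbour indexing rather than a linear scan.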

Posted 1 day ago

Apply

5.0 years

0 Lacs

Ganganagar, Rajasthan, India

On-site

32840BR
Bangalore - Campus

Job Description

Key Areas of Responsibility: Participate in design discussions about the technical implementation and consider the trade-offs to support business value, scalability and delivery timeline. Implement and optimize machine learning algorithms and models to solve specific business problems or improve existing processes. Collect, clean, and preprocess large datasets for training and evaluation. Perform exploratory data analysis to gain insights and inform feature engineering. Train machine learning models using various techniques such as supervised, unsupervised, and reinforcement learning. Evaluate model performance using appropriate metrics and iterate on model design as needed. Integrate models and GenAI services with existing software infrastructure and workflows. Optimize the performance of machine learning models and algorithms, considering factors such as speed, accuracy, memory usage, and scalability. Collaborate with cross-functional teams including the Model Discovery team, DxP, IDP, Horizontal/Vertical Solutions teams, Infrastructure teams, etc. Stay updated with the latest advancements in machine learning research and technologies. Experiment with new algorithms, frameworks, and tools to improve model performance and enhance capabilities. Document code, algorithms, and methodologies to facilitate knowledge sharing and maintain codebase integrity. Contribute to peer/code review and internal knowledge repositories, and participate in knowledge-sharing sessions to disseminate best practices and lessons learned.
Skills Required: Candidate with 5+ years of relevant experience in Machine Learning, AI, and Python (mandatory skills). Strong communication, collaboration and problem-solving skills with a track record of delivering production-grade systems in a team environment. Motivated individual who learns quickly, takes pride in building a new product and can engage others to accelerate technical solutions. 2+ years of experience in AI & ML, Python and working with agile scrum methodologies. Strong DL for cases like image/audio/text classification. Strong NLP (LLM) knowledge of entity extraction. Multi-lingual LM and multi-modal LM experience are strongly preferred. Hands-on experience with GenAI (LLMs) and expertise in prompt engineering. Machine learning pipeline knowledge and hands-on experience with Kubeflow or MLflow frameworks (highly desirable but not required). Experience in MLOps, Cloud, Python, Kubernetes, Workflows, MongoDB, PostgreSQL. Experience with relational and non-relational databases (Postgres, MongoDB, GraphDB, VectorDB). 1+ year of experience with Google Cloud Platform, AWS or Azure. Experience with agile tools such as Atlassian JIRA, Rally, TFS or VersionOne. Experience with CI/CD tools (e.g., GitLab, Jenkins, Git), Docker, and Linux shell scripting.

Qualifications: BE. Experience range: 4–6 years.
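The responsibilities above include evaluating model performance using appropriate metrics. As a minimal illustration, precision, recall, and F1 for a binary classifier can be computed by hand (pure Python; the label lists are toy data, and a real pipeline would normally use a library such as scikit-learn for this):

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics from parallel label lists (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels: 2 true positives, 1 false positive, 1 false negative.
p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```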

Posted 1 day ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Performance Engineer (SDET-2/3)
Experience: 4–6 years

About the Role: We are looking for a Performance Engineer to join our engineering team and ensure the scalability, reliability, and performance of our applications and infrastructure. You will play a key role in benchmarking, profiling, and identifying bottlenecks across our systems. Your expertise will directly contribute to delivering smooth and responsive experiences to our users.

What You’ll Do:
● Own the performance strategy — build and run load tests using tools like Locust, k6, or Gatling.
● Benchmark APIs, profile backend services, and optimize system-level bottlenecks.
● Dive deep into logs and metrics (ELK, Prometheus, Grafana) to find performance issues.
● Work closely with engineering teams to design for scalability from day one.
● Tune databases (MySQL, Redis, MongoDB), queues (Kafka/RabbitMQ), and caching layers.
● Set up performance testing in CI pipelines so we never miss regressions.
● Help define and monitor SLAs, SLOs, and alerting thresholds.

✅ You Should Have:
● 3–6 years of experience in performance testing and optimization.
● Strong grasp of HTTP, REST APIs, and backend performance fundamentals.
● Hands-on experience with any of these languages: Python, Go, or Java.
● Experience with APM tools like New Relic, AppDynamics, or Datadog.
● Familiarity with AWS/GCP, Kubernetes, and cloud-native observability.

💡 Nice to Have:
● Worked on a high-scale product (e.g., fintech, social, mobility, or ecommerce).
● Experience with asynchronous systems, circuit breakers, and rate-limiting.
● Experience debugging unusual slowdowns and memory/CPU spikes in production.
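One of the responsibilities above is helping define and monitor SLAs, SLOs, and alerting thresholds. A minimal sketch of a p95-latency SLO check using the nearest-rank percentile method (pure Python; the 200 ms threshold and the sample latencies are illustrative values, not figures from the posting):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # nearest-rank method
    return ordered[rank - 1]

def slo_breached(samples, pct=95, threshold_ms=200.0):
    """True if the chosen percentile exceeds the latency threshold."""
    return percentile(samples, pct) > threshold_ms

latencies = [12, 15, 18, 22, 30, 35, 40, 55, 180, 450]  # ms, toy data
print(percentile(latencies, 95), slo_breached(latencies))
```

In practice the same check would run against metrics scraped from Prometheus or an APM tool, with the threshold driving an alert rule rather than a print statement.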

Posted 1 day ago

Apply

3.0 years

10 - 18 Lacs

Bengaluru, Karnataka, India

On-site

Are you passionate about building scalable backend systems, working with cutting-edge technologies, and solving real-world challenges through clean and efficient code? We are looking for a seasoned Software Development Engineer (SDE 1) with 3+ years of experience to join our dynamic team and drive impactful backend development projects.

Key Responsibilities: Lead backend and full-stack development initiatives using Java (mandatory) and optionally Python. Architect and implement microservices, apply SOLID and OOP principles, and design for performance and scalability. Build and maintain robust APIs (REST, GraphQL) using tools like Swagger and Postman. Work with Spring Boot and Hibernate, and ensure clean, modular code. Manage both SQL (MySQL) and NoSQL (MongoDB) databases with strong ACID compliance. Enforce security standards (SSL, TLS, cookies, headers). Develop and deploy on AWS (EC2, Lambda, ECS Fargate, S3, DynamoDB, SQS/SNS). Ensure DevOps best practices: Git, GitHub, CI/CD pipelines, code reviews, JUnit/Mockito testing. Optimize performance with caching (L1/L2), throttling, and load balancing.

Must-Have Expertise: Strong grasp of Data Structures & Algorithms (DSA). Solid foundation in Object-Oriented Programming (OOP).

Ideal Candidate: 3+ years in software development with a strong backend focus. Proactive problem solver and excellent communicator. Experience mentoring junior developers or leading tech discussions.

Location: Bengaluru

Skills: REST APIs, Postman, Python, Hibernate, data structures, MySQL, Java, Data Structures & Algorithms (DSA), OOP principles, microservices, SOLID principles, core Java, MongoDB, security standards (SSL, TLS), NoSQL, load balancing, Spring Boot, AWS (EC2, Lambda, ECS Fargate, S3, DynamoDB, SQS/SNS), throttling, DevOps (Git, GitHub, CI/CD), Mockito, GraphQL, JUnit, caching, SQL, Swagger, AWS
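Throttling, listed among the responsibilities above, is commonly implemented with a token bucket. A minimal sketch (pure Python rather than the posting's Java, for brevity; the capacity and refill rate are arbitrary example values, and the clock is injected so the behaviour is deterministic rather than wall-clock dependent):

```python
class TokenBucket:
    """Allow at most `capacity` burst requests, refilled at `rate` tokens/sec."""

    def __init__(self, capacity, rate, clock):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock          # callable returning current time in seconds
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        """Consume one token if available; False means the request is throttled."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Fake clock so the example is reproducible.
t = [0.0]
bucket = TokenBucket(capacity=2, rate=1.0, clock=lambda: t[0])
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
t[0] = 1.0  # one second later, one token has been refilled
print(bucket.allow())  # True
```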

Posted 1 day ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who We Are: Saks Global is a combination of world-class luxury retailers, including Neiman Marcus, Bergdorf Goodman, Saks Fifth Avenue and Saks OFF 5TH, as well as a portfolio of prime U.S. real estate holdings and investments. Saks Global is deeply committed to helping luxury consumers discover the most sought-after established and emerging brands from around the world. Powered by data-driven technology and centered on the customer, Saks Global is on a mission to redefine the luxury shopping experience through highly personalized service, with greater opportunities for product discovery across all channels.

Role Summary: The Assistant Manager, Buying Operations is responsible for overseeing the sample management & item-setup process. The lead is responsible for driving process improvement and the development & implementation of efficiency metrics. They are also responsible for achieving topside sales plans, conversion goals, usability performance targets & corporate objectives. They will oversee teams focused on ensuring timely production of merchandise, with accountability for complete & accurate turn-in processes, product information & assortments. They will also drive ongoing efficiency & quality improvements.
Key Qualifications: Experience in the field of item setup in a multi-banner e-commerce retail environment. Minimum 7+ years of experience, of which at least 2+ years in people management. Monitor volumes & prioritize the team’s workload accordingly to meet timelines. Create & develop solutions to streamline operations, improve consistency & increase the team’s efficiency. Develop training materials & product guides as needed. Understand the multi-channel/banner aspect of the business & help manage it with direct reports. Participate in long-term planning & resource allocation discussions; manage forecasting & freelancer scheduling/budget. Proficiency with merchandising systems (e.g., PIM, RFS). Technical aptitude with web-based tools & proficiency with the Microsoft Office Suite. Action- & detail-oriented, organized, with the ability to manage teams to execute within deadlines. Demonstrated resource workload & capacity management skills & a proven ability to manage multiple resources, priorities & a large volume of business. Demonstrated ability to analyze & react to quality & performance metrics to drive quality & efficiency within the team. Ability to select & develop a team of future leaders. Ability to perform well, problem-solve & brainstorm in a collaborative environment & inspire a strong sense of camaraderie, accountability & high performance across teams. Sound business judgment, a proven ability to influence others & strong decision-making skills. Must have a minimum of 5 years of experience in e-commerce businesses.

Role Description: Develop strategies to scale, monitor & streamline the vendor-provided asset acquisition & product turn-in processes to ensure a consistent & even flow of products to turn-in across all categories/banners on a daily basis. Proactively work to improve the turn-in process by conducting regular strategic reviews of turn-in metrics & work with cross-functional partners to identify & implement opportunities to improve the accuracy, efficiency & scalability of the turn-in process. Interface with buying organizations to prioritize item creation & PO entry to drive full-price sales by providing clarity on merchant PO inputs through reporting. Manage inventory control & transfers to/from vendors & DCs. Oversee & drive the item-setup process & improvements, focusing on accuracy & a consistent customer experience. Ensure timely live dates of products. Oversee team quality metrics & define ways to improve them, including but not limited to reducing NOS, improving time to site & increasing compliance. Provide thought leadership on process efficiency initiatives including daily publication, PIM, sample workflow management & cross-functional training. Drive & ensure continuous process efficiency & performance improvements across Sample Management teams. Apply best practices across categories/banners. Continue to review the organizational structure to ensure accurate headcount to facilitate the continuing growth of the business. Streamline Sample Management workflow processes & leverage best practices across teams, locations & banners. Liaise with the buying offices on lifecycle-related priorities/issues. Partner with Asset Protection & DC teams on studio inventory management & align on all policies & procedures. Lead, coach, and develop a team, ensuring high levels of engagement, performance, and collaboration. Set clear goals and performance expectations in alignment with business objectives. Conduct regular one-on-ones, performance reviews, and feedback sessions to support employee development. Promote a diverse, inclusive, and respectful work environment. Support workforce planning, recruitment, and onboarding efforts in collaboration with HR. Drive employee engagement through recognition, team-building, and clear communication.

Your Life and Career at Saks: Exposure to rewarding career advancement opportunities. A culture that promotes a healthy, fulfilling work/life balance. A benefits package for all eligible full-time employees (including medical, vision and dental).

Thank you for your interest in Saks. We look forward to reviewing your application. Saks provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Saks complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. Saks welcomes all applicants for this position. Should you be individually selected to participate in an assessment or selection process, accommodations are available upon request in relation to the materials or processes to be used.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site

Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
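Hyperparameter fine-tuning, mentioned under Training & Evaluation above, can be sketched as a plain grid search over a validation score (pure Python; `toy_score` is a hypothetical stand-in for a real train-and-evaluate step, and the parameter names and values are illustrative only):

```python
import itertools

def grid_search(param_grid, score_fn):
    """Return the best (params, score) over the Cartesian product of a grid."""
    keys = sorted(param_grid)
    best = None
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)          # higher is better
        if best is None or score > best[1]:
            best = (params, score)
    return best

# Toy validation score peaking at lr=0.1, depth=3 (stand-in for model training).
def toy_score(p):
    return -abs(p["lr"] - 0.1) - 0.01 * abs(p["depth"] - 3)

best_params, best_score = grid_search(
    {"lr": [0.01, 0.1, 1.0], "depth": [2, 3, 5]}, toy_score)
print(best_params)  # best combination: lr=0.1, depth=3
```

Managed services such as Azure ML sweep jobs automate the same loop, adding parallelism and early termination on top of this basic structure.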

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We’re Hiring | ClickHouse Administrator
Location: Hyderabad
Experience: 5+ Years

Role Overview: We’re looking for an experienced ClickHouse Administrator to manage the complete lifecycle of on-prem ClickHouse clusters — from architecture and deployment to security, monitoring, and disaster recovery. You’ll work closely with cross-functional teams to ensure high availability, scalability, and compliance.

Key Responsibilities: Design and implement on-prem ClickHouse deployments with high availability & scalability. Install, configure, and upgrade ClickHouse servers & client tools. Define and enforce security policies (TLS, authentication, RBAC, auditing). Monitor cluster health, plan capacity, and fine-tune performance. Develop backup, restore, and disaster recovery strategies. Collaborate with networking, storage, and security teams on compliance requirements.

Required Skills & Qualifications: Bachelor’s degree in Computer Science, Information Systems, or equivalent. 5+ years of database administration experience (1+ year in ClickHouse). Strong Linux administration skills (RHEL/CentOS, Ubuntu). Proficiency in Bash and/or Python scripting. Knowledge of networking, storage (SAN/NAS), and virtualization. Familiarity with security frameworks (LDAP, Kerberos, TLS). Experience with Kubernetes/Docker. Bonus: ClickHouse Certified Developer certification.

Nice-to-Have: Experience with monitoring stacks (Prometheus, Grafana). Hands-on experience with configuration management (Ansible, Chef, Puppet).

Posted 1 day ago

Apply

4.0 years

0 Lacs

India

Remote

Position: Azure Data Engineer
Location: Remote
Experience: 4+ Years
Notice Period: Immediate Joiners Only

Overview: We are looking for an experienced Azure Data Engineer to work in a hybrid Developer + Support role. The position involves enhancing and supporting existing Data & Analytics solutions using Azure technologies, ensuring performance, scalability, and reliability.

Key Skills (Must-Have): Azure Databricks, PySpark, Azure Synapse Analytics.

Responsibilities: Design, develop, and maintain data pipelines using Azure Data Factory, Databricks, and Synapse. Perform data cleansing, transformation, and enrichment using PySpark. Handle incident classification, root cause analysis, and resolution. Conduct code reviews and fix recurring/critical bugs. Coordinate with SMEs and stakeholders for issue resolution. Collaborate with teams to deliver robust and scalable data solutions. Contribute to CI/CD processes via Azure DevOps.

Requirements: 4–6 years of experience in Azure data engineering. Strong skills in Databricks, Synapse, ADLS Gen2, Python, PySpark, and SQL. Knowledge of file formats (JSON, Parquet) and databases (Teradata, Snowflake preferred). Experience with Azure DevOps and CI/CD pipelines. Familiarity with ServiceNow for incident/change management. Strong communication, problem-solving, and time management skills.

Nice-to-Have: Power BI experience. DP-203 certification.
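Data cleansing, transformation, and enrichment, as listed in the responsibilities above, can be illustrated with a small pure-Python sketch (PySpark is deliberately omitted so the example stays self-contained; in a real pipeline the same steps would be DataFrame filter/withColumn transformations, and the field names and default below are hypothetical):

```python
def clean_records(records, default_country="IN"):
    """Trim strings, drop rows missing an id, and enrich with a country default."""
    cleaned = []
    for row in records:
        if not row.get("id"):
            continue  # drop rows with a missing key, as a filter step would
        cleaned.append({
            "id": row["id"],
            "name": (row.get("name") or "").strip().title(),
            "country": row.get("country") or default_country,  # enrichment
        })
    return cleaned

raw = [
    {"id": 1, "name": "  alice  ", "country": None},
    {"id": None, "name": "bob"},        # dropped: missing id
    {"id": 3, "name": "CAROL", "country": "US"},
]
print(clean_records(raw))
```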

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

On-site

The ideal candidate will be responsible for developing high-quality applications and for designing and implementing testable and scalable code. We are looking for a Full-Stack React.js Developer. Apply with an updated CV at sony.pathak@aptita.com

Lead Engineer - React
Notice period: Immediate to 30 days
Experience range: 8 years
Must-have experience: React.js, Node.js

Education and experience:
○ Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
○ Minimum of 8 years of professional experience in full-stack development.
● Technical Requirements:
○ Proficiency in JavaScript, including ES6 and beyond, asynchronous programming, closures, and prototypal inheritance.
○ Expertise in modern front-end frameworks/libraries (React, Vue.js).
○ Strong understanding of HTML5, CSS3, and pre-processing platforms like SASS or LESS.
○ Experience with responsive and adaptive design principles.
○ Knowledge of front-end build tools like Webpack, Babel, and npm/yarn.
○ Proficiency in Node.js and frameworks like Express.js, Koa, or NestJS.
○ Experience with RESTful API design and development.
○ Experience with serverless (Lambda, Cloud Functions).
○ Experience with GraphQL.
○ Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Redis).
○ Experience with caching & search frameworks (Redis, Elasticsearch).
○ Proficiency in database schema design and optimization.
○ Experience with containerization tools (Docker, Kubernetes).
○ Experience with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
○ Knowledge of cloud platforms (AWS, Azure, Google Cloud).
○ Proficiency in testing frameworks and libraries (Jest, Vitest, Cypress, Storybook).
○ Strong debugging skills using tools like Chrome DevTools and the Node.js debugger.
○ Expertise in using Git and platforms like GitHub, GitLab, or Bitbucket.
○ Understanding of web security best practices (OWASP).
○ Experience with authentication and authorization mechanisms (OAuth, JWT).
○ Experience with system security, scalability, and system performance.

Qualifications: Bachelor's degree or equivalent experience in Computer Science or a related field. Development experience with programming languages. SQL database or relational database skills.

Posted 1 day ago


0 years

0 Lacs

India

On-site

Job Summary
We are seeking a highly skilled S/4HANA Embedded Analytics Specialist to design, develop, and enhance analytics capabilities within the SAP S/4HANA environment. The ideal candidate will have expertise in leveraging S/4HANA's Embedded Analytics tools, Core Data Services (CDS), and Embedded BW (Business Warehouse) to provide actionable insights, optimize business processes, and drive data-driven decision-making.

Key Responsibilities
1. Design, build, and deploy reports, dashboards, and analytical applications using SAP S/4HANA Embedded Analytics. Develop Core Data Services (CDS) views to expose business data for analytics and reporting.
2. Leverage SAP Embedded BW for advanced data modeling, extraction, and transformation tasks.
3. Develop BW objects such as InfoObjects, CompositeProviders, and Open ODS views in the S/4HANA Embedded BW environment.
4. Extract and transform data for reporting and analytics using Embedded BW capabilities.
5. Optimize data flows and queries for performance and scalability in the Embedded BW environment.
6. Optimize CDS views, Embedded BW objects, and analytics applications for performance and usability. Ensure data accuracy, integrity, and security across all analytics solutions.
7. Integrate Embedded Analytics and Embedded BW with external tools such as SAP Analytics Cloud (SAC) or third-party BI tools.
8. Design intuitive dashboards and KPIs for end users using data from Embedded BW and CDS views.

Must Have: Excellent knowledge of, and hands-on skills in, SAP S/4 Embedded Analytics (CDS views), ABAP, and AMDP; working knowledge of creating and maintaining OData services and Analytical List Pages
Nice to Have: Good proficiency in SAP BW on HANA, BW/4HANA, and SAP Analytics Cloud

Posted 1 day ago


9.0 - 12.0 years

0 Lacs

India

On-site

Job Summary
We are looking for a highly skilled Technical Architect with expertise in AWS, Generative AI, AI/ML, and scalable production-level architectures. The ideal candidate should have 9-12 years of overall experience, including handling multiple clients, leading technical teams, and designing end-to-end cloud-based AI solutions. This role involves architecting AI/ML/GenAI-driven applications and ensuring best practices in cloud deployment, security, and scalability while collaborating with cross-functional teams.

Key Responsibilities

Technical Leadership & Architecture
Design and implement scalable, secure, and high-performance architectures on AWS for AI/ML applications.
Architect multi-tenant, enterprise-grade AI/ML solutions using AWS services such as SageMaker, Bedrock, Lambda, API Gateway, DynamoDB, ECS, S3, OpenSearch, and Step Functions.
Lead full-lifecycle development of AI/ML/GenAI solutions, from PoC to production, ensuring reliability and performance.
Define and implement best practices for MLOps, DataOps, and DevOps on AWS.

AI/ML & Generative AI Expertise
Design Conversational AI, RAG (Retrieval-Augmented Generation), and Generative AI architectures using models like Claude (Anthropic), Mistral, Llama, and Titan.
Optimize LLM inference pipelines, embeddings, vector search, and hybrid retrieval strategies for AI-based applications.
Drive ML model training, deployment, and monitoring using AWS SageMaker and AI/ML pipelines.

Cloud & Infrastructure Management
Architect event-driven, serverless, and microservices architectures for AI/ML applications.
Ensure high availability, disaster recovery, and cost optimization in cloud deployments.
Implement IAM, VPC, security best practices, and compliance.

Team & Client Engagement
Lead and mentor a team of ML engineers, Python developers, and cloud engineers.
Collaborate with business stakeholders, product teams, and multiple clients to define requirements and deliver AI/ML/GenAI-driven solutions.
Conduct technical workshops, training sessions, and knowledge-sharing initiatives.

Multi-Client & Business Strategy
Manage multiple client engagements, delivering AI/ML/GenAI solutions tailored to their business needs.
Define AI/ML/GenAI roadmaps, proof-of-concept strategies, and go-to-market AI solutions.
Stay updated on cutting-edge AI advancements and drive innovation in AI/ML offerings.

Key Skills & Technologies

Cloud & DevOps
AWS services: Bedrock, SageMaker, Lambda, API Gateway, DynamoDB, S3, ECS, Fargate, OpenSearch, RDS
MLOps: SageMaker Pipelines, CI/CD (CodePipeline, GitHub Actions, Terraform, CDK)
Security: IAM, VPC, CloudTrail, GuardDuty, KMS, Cognito

AI/ML & GenAI
LLMs & Generative AI: Bedrock (Claude, Mistral, Titan), OpenAI, Llama
ML frameworks: TensorFlow, PyTorch, LangChain, Hugging Face
Vector DBs: OpenSearch, Pinecone, FAISS
RAG pipelines, prompt engineering, fine-tuning

Software Architecture & Scalability
Serverless and microservices architecture
API design and GraphQL
Event-driven systems (SNS, SQS, EventBridge, Step Functions)
Performance optimization and auto scaling
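The RAG pattern this role centers on has a simple core: rank stored documents against a query embedding, then prepend the top hits to the prompt. A minimal sketch in plain Python, assuming embeddings are already computed (the corpus, vectors, and function names below are illustrative, not tied to any particular AWS service):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # corpus: list of (text, embedding) pairs; return the top-k texts by similarity.
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, context_docs):
    # Prepend retrieved context to the user question (the "augmentation" step).
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

# Toy 3-dimensional embeddings; a real system would use a model and a vector DB.
corpus = [
    ("Invoices are due in 30 days.", [0.9, 0.1, 0.0]),
    ("The office closes at 6 pm.", [0.1, 0.9, 0.0]),
    ("Refunds take 5 business days.", [0.8, 0.2, 0.1]),
]
docs = retrieve([1.0, 0.0, 0.0], corpus, k=2)
prompt = build_prompt("When are invoices due?", docs)
```

In production the ranking step is delegated to a vector store such as OpenSearch or Pinecone, but the retrieve-then-augment shape stays the same.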

Posted 1 day ago


0 years

0 Lacs

India

On-site

Who We Are
Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

Project Overview
You will be working on enterprise-scale, data-driven applications within a digital transformation initiative. This includes delivering cloud-based and on-premises solutions in areas such as data management, governance, analytics, and marketplace platforms. The role spans the design, development, and deployment of modern applications leveraging Microsoft .NET, Azure services, and enterprise data integration tools.

Key Responsibilities
Design, develop, and maintain full-stack .NET applications (front end and back end) supporting enterprise data and analytics solutions.
Integrate applications with Azure Data Services, Databricks, Power BI, and other platforms.
Implement APIs and services for data integration and interoperability across systems.
Collaborate with data engineering teams to connect with data pipelines, warehouses, and governance platforms.
Develop reusable, scalable components for UI and business logic.
Implement secure, role-based access controls (RBAC) and compliance measures.
Participate in Agile ceremonies and contribute to sprint planning, estimations, and delivery tracking.
Conduct code reviews and ensure adherence to coding best practices and architectural guidelines.
Collaborate with UX/UI designers to deliver intuitive, user-friendly interfaces.
Optimize application performance, scalability, and maintainability.
Integrate with tools such as Collibra, Microsoft Purview, Informatica MDM, and enterprise service bus (ESB) systems where required.

Primary Skills & Experience (Must-Have)
.NET full-stack development: C#, ASP.NET Core, Entity Framework, LINQ, REST APIs, Web API.
Front-end development: Angular/React, JavaScript/TypeScript, HTML5, CSS3, Bootstrap/Tailwind.
Azure cloud services: Azure Functions, Azure App Services, Azure SQL, Azure Data Factory, Azure Storage.
Database skills: SQL Server, stored procedures, query optimization.
API development and integration: REST, GraphQL, JSON, XML.
Agile/Scrum delivery: Azure DevOps/JIRA for task management and CI/CD pipelines.
Security and compliance: RBAC, OAuth2, JWT authentication, secure coding practices.

Nice to Have
Experience with Databricks and Power BI integration.
Knowledge of Collibra or Microsoft Purview for metadata and governance.
Understanding of Informatica PowerCenter/MDM and ESB integration.
Familiarity with data governance, marketplace platforms, or business glossary tools.

Skills: Databricks, Azure Functions, RBAC, C#, LINQ, API development and integration, Agile/Scrum delivery, Web API, .NET full-stack development, HTML5, ASP.NET Core, Azure SQL, Azure DevOps, Azure Data Factory, Entity Framework, security and compliance, Angular, OAuth2, Bootstrap, Azure App Services, .NET, Azure, CSS3, React, Azure Storage, Tailwind, JavaScript, TypeScript, JWT, SQL Server, REST APIs
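The JWT authentication this role requires is language-agnostic: a token is three base64url segments, with the claims in the middle. The inspection step can be sketched in a few lines (shown in Python for brevity, though this stack is .NET; the token contents are made up, and real services must also verify the signature, which is deliberately omitted here):

```python
import base64
import json

def b64url_decode(segment):
    # JWT segments use base64url without padding; restore padding before decoding.
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)

def jwt_claims(token):
    # A JWT is header.payload.signature; the claims live in the middle segment.
    # NOTE: this does NOT verify the signature. Never trust unverified claims.
    _, payload, _ = token.split(".")
    return json.loads(b64url_decode(payload))

# Build a demo (unsigned) token just to show the round trip.
header = base64.urlsafe_b64encode(b'{"alg":"none","typ":"JWT"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(b'{"sub":"user-42","role":"admin"}').rstrip(b"=").decode()
token = f"{header}.{payload}."

claims = jwt_claims(token)
```

In ASP.NET Core the equivalent decode-and-verify is handled by the JWT bearer authentication middleware rather than written by hand.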

Posted 1 day ago


0.0 - 3.0 years

0 - 0 Lacs

Delhi, Delhi

On-site

Position Title: AI Automation Specialist (No-Code/Low-Code Expert)
Location: E2, Plot No. 4, Jhandewalan Extension, Near Metro Station Gate No. 2, New Delhi - 110055
Employment Type: Full-time / Contract (based on experience)

Company Overview
BookLeaf Publishing is one of India’s most trusted self-publishing platforms, recognized for its innovation, scalability, and process automation. We’re committed to transforming the publishing landscape through intelligent systems that minimize manual intervention and maximize efficiency.

Role Summary
We are seeking a driven AI Automation Specialist to join our team. In this role, you will lead the design and deployment of intelligent, scalable systems using no-code/low-code platforms and AI-based integrations. Your core responsibility will be to eliminate repetitive tasks and enhance operational efficiency across departments.

Key Responsibilities
Develop and maintain AI-driven automation tools, bots, and workflows to streamline business processes
Build intelligent chatbots that handle real-time data and offer contextual support
Integrate multiple platforms and tools, including CRMs, Google Sheets, email systems, and social media APIs
Consolidate and synchronize customer data across systems, reducing the need for manual data handling
Prepare comprehensive documentation for workflows and enable smooth handover to non-technical stakeholders

Required Experience

Must-Have:
1-3 years of hands-on experience building automation workflows using tools like Zapier, Make.com, Bubble, Airtable, or similar
Practical understanding of API integrations, webhooks, and conditional logic
Experience using AI tools (e.g., OpenAI GPT models, Dialogflow, LangChain) in real-world workflows
Ability to design solutions independently, from brief to execution

Preferred but Not Mandatory:
Prior experience in startups, SaaS, publishing, or customer support automation
Light coding skills in JavaScript or Python (for custom steps in workflows)
Familiarity with chatbot frameworks like Rasa, Botpress, or Dialogflow
Experience integrating with platforms like Gmail, WhatsApp Business API, Meta (Instagram) Graph API, Google Sheets, CRMs, etc.

Core Competencies and Technical Skills
Proficiency in no-code/low-code automation platforms such as:
Zapier, Make (Integromat)
OpenAI (GPT-4, LangChain)
Bubble, Airtable, Notion API
Dialogflow, Botpress, Rasa
Strong analytical and systems thinking, with the ability to creatively connect tools and workflows
A product-oriented mindset, capable of identifying automation opportunities and implementing end-to-end solutions with minimal oversight

Why Join Us
If you're passionate about building intelligent workflows, thrive on problem-solving, and want to shape the future of publishing through automation, BookLeaf Publishing offers a dynamic and forward-thinking environment for your growth.

Job Type: Full-time
Pay: ₹30,000.00 - ₹70,000.00 per month
Benefits: Leave encashment

Application Questions:
What is your age?
Are you willing to come to Jhandewalan, as this is an on-site role?
Do you have your own laptop, as we do not provide laptops to employees?

Work Location: In person
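The webhook-plus-conditional-logic work described above mirrors the filter/branch steps configured in tools like Zapier or Make, and the same branching can be sketched in the "light coding" Python the posting mentions. The field names and actions below are illustrative, not a real CRM schema:

```python
def route_lead(payload):
    """Decide what an automation should do with an incoming webhook payload.

    Mirrors the filter/branch steps you would configure in Zapier or Make;
    the fields ("source", "budget") and actions are made-up examples.
    """
    source = payload.get("source", "unknown")
    budget = payload.get("budget", 0)

    if source == "instagram" and budget >= 500:
        # High-intent social lead: alert sales over the messaging channel.
        return {"action": "notify_sales", "channel": "whatsapp"}
    if source == "website_form":
        # Standard inbound lead: push to the CRM with a tag.
        return {"action": "add_to_crm", "tag": "inbound"}
    # Fallback branch, like a Zap path with no matching filter.
    return {"action": "log_only"}

decision = route_lead({"source": "instagram", "budget": 800})
```

A custom code step like this slots into a no-code workflow wherever the platform's built-in filters run out of expressiveness.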

Posted 1 day ago


5.0 years

0 Lacs

Chandigarh, India

On-site

Zevpoint is a fast-growing EV charging solutions company building hardware and software products for India’s electric mobility future. We develop smart charging systems, web platforms, and e-commerce solutions to make EV charging seamless.

Role Overview
We are looking for an experienced Full Stack Developer to design, build, and maintain scalable web applications, Shopify integrations, and backend systems. The ideal candidate will be strong in both front-end and back-end development, with the ability to deliver end-to-end solutions.

Key Responsibilities
Develop and maintain applications using Python, Go (Golang), React.js, and HTML/CSS.
Build and customize Shopify themes, sections, and apps using Liquid and APIs.
Implement backend services, APIs, and integrations with third-party systems.
Work with databases (MySQL, PostgreSQL, MongoDB) for data storage and retrieval.
Optimize applications for performance, scalability, and security.
Collaborate with design and product teams to deliver intuitive user experiences.

Requirements
Bachelor’s degree in Computer Science, IT, or a related field.
2-5 years of full-stack development experience.
Strong skills in React.js, Python, Go (Golang), HTML5, CSS3, JavaScript (ES6+), and Liquid.
Hands-on experience with REST APIs, Git, and cloud deployment.
Solid understanding of responsive UI/UX principles.
Problem-solving mindset and attention to detail.

Good to Have
Experience with IoT/EV charger integrations or the OCPP protocol.
Payment gateway integration experience (Razorpay, Stripe, etc.).

Why Join Us?
Competitive salary and performance bonuses.
Opportunity to work on cutting-edge EV tech.
A collaborative and innovation-driven work culture.

Posted 1 day ago


4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Experience: 4-6 years
Location: Mumbai (Thane)
Only immediate joiners

Key Responsibilities

Database Engineering & Operations
Own and manage critical components of the database infrastructure across production and non-production environments.
Ensure performance, availability, scalability, and reliability of databases including PostgreSQL, MySQL, and MongoDB.
Drive implementation of best practices in schema design, indexing, query optimization, and database tuning.
Take initiative in root-cause analysis and resolution of complex performance and availability issues.
Implement and maintain backup, recovery, and disaster recovery procedures; contribute to testing and continuous improvement of these systems.
Ensure system health through robust monitoring, alerting, and observability using tools such as Prometheus, Grafana, and CloudWatch.
Implement and improve automation for provisioning, scaling, maintenance, and monitoring tasks using scripting (e.g., Python, Bash).

Database Security & Compliance
Enforce database security best practices, including encryption at rest and in transit, IAM/RBAC, and audit logging.
Support data governance and compliance efforts related to SOC 2, ISO 27001, or other regulatory standards.
Collaborate with the security team on regular vulnerability assessments and hardening initiatives.

DevOps & Collaboration
Partner with DevOps and Engineering teams to integrate database operations into CI/CD pipelines using tools like Liquibase, Flyway, or custom scripting.
Participate in infrastructure-as-code workflows (e.g., Terraform) for consistent and scalable DB provisioning and configuration.
Proactively contribute to cross-functional planning, deployments, and system design sessions with engineering and product teams.

Required Skills & Experience
4-6 years of production experience managing relational and NoSQL databases in cloud-native environments (AWS, GCP, or Azure).
Proficiency in:
Relational databases: PostgreSQL and/or MySQL
NoSQL databases: MongoDB (exposure to Cassandra or DynamoDB is a plus)
Deep hands-on experience in performance tuning, query optimization, and troubleshooting live systems.
Strong scripting ability (e.g., Python, Bash) for automation of operational tasks.
Experience implementing monitoring and alerting for distributed systems using Grafana, Prometheus, or equivalent cloud-native tools.
Understanding of security and compliance principles and how they apply to data systems.
Ability to operate with autonomy while collaborating in fast-paced, cross-functional teams.
Strong analytical, problem-solving, and communication skills.

Nice to Have (Bonus)
Experience with infrastructure-as-code tools (Terraform, Pulumi, etc.) for managing database infrastructure.
Familiarity with Kafka, Airflow, or other data pipeline tools.
Experience working in multi-region or multi-cloud environments with high-availability requirements.
Exposure to analytics databases (e.g., Druid, ClickHouse, BigQuery, Vertica) or search platforms like Elasticsearch.
Participation in on-call rotations and contribution to incident response processes.
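The monitoring-and-alerting scripting this role calls for often reduces to a percentile check over recent query latencies, the same shape of rule encoded in Prometheus or Grafana alerting. A minimal Python sketch; the 250 ms threshold and sample values are placeholders, not recommendations:

```python
import math

def percentile(samples, pct):
    # Nearest-rank percentile over a list of latency samples (milliseconds).
    ordered = sorted(samples)
    idx = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[idx]

def should_alert(latencies_ms, p95_threshold_ms=250):
    # Fire when the 95th-percentile query latency breaches the threshold.
    return percentile(latencies_ms, 95) > p95_threshold_ms

calm = [12, 15, 18, 20, 22, 25, 30, 35, 40, 45]     # healthy window
spike = calm + [400, 650]                            # window with slow queries
```

Percentiles rather than averages are the usual choice here because a handful of slow queries can hide behind a healthy mean.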

Posted 1 day ago


0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: AI Developer
Location: Pune (Hybrid)
Budget: Up to 25 LPA
Experience Required: 5+ years (2+ years of relevant AI development experience)

Job Summary
We are looking for an AI Developer with hands-on experience working with various AI tools, proof of concepts (POCs), and AI-driven projects. The ideal candidate should be proficient in designing, developing, and deploying AI solutions while collaborating with cross-functional teams. Strong problem-solving skills, a passion for AI innovation, and excellent communication abilities are essential.

Key Responsibilities

AI Development & Implementation
Develop and implement AI/ML models and algorithms for real-world applications.
Work on proof of concepts (POCs) and AI-based prototypes, translating ideas into deployable solutions.
Optimize and fine-tune AI models for scalability, performance, and efficiency.
Integrate AI technologies into existing business processes and software applications.

AI Tools & Technology Stack
Hands-on experience with AI/ML tools, frameworks, and libraries such as TensorFlow, PyTorch, OpenAI, LangChain, and Hugging Face.
Work with Large Language Models (LLMs), Generative AI, Natural Language Processing (NLP), and Computer Vision.
Utilize cloud platforms such as AWS, Azure, or Google Cloud for AI model deployment and scaling.

Project Execution & Collaboration
Collaborate with data scientists, software engineers, and business teams to define AI-driven solutions.
Work closely with stakeholders to understand business problems and develop AI-based recommendations.
Participate in code reviews, debugging, and troubleshooting to ensure AI applications are robust.

Communication & Documentation
Communicate AI concepts, technical findings, and project progress to technical and non-technical stakeholders.
Document AI models, processes, and best practices for knowledge sharing.

Candidate Requirements
Minimum of two years of relevant experience in AI/ML development.
Strong experience in building AI-driven applications, POCs, and automation solutions.
Hands-on expertise with AI tools, frameworks, and cloud platforms.
Proficiency in programming languages such as Python, R, or Java for AI/ML model implementation.
Experience with LLMs, NLP, deep learning, and Generative AI is preferred.
Strong problem-solving and analytical skills to drive AI initiatives.
Excellent communication skills to convey technical concepts to diverse audiences.
Ability to work in a hybrid model from Pune (on-site and remote).

Why Join Us?
Opportunity to work on cutting-edge AI technologies and projects.
Collaborative environment with AI experts and business leaders.
A chance to drive AI innovation in a dynamic and fast-paced setting.
Competitive salary package.
Hybrid work model based in Pune.
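Integrating LLM calls into applications, as this role requires, usually starts with a thin wrapper that retries transient failures and keeps the provider swappable. A minimal sketch; the client interface and stub below are illustrative, not a real provider SDK:

```python
import time

class TransientError(Exception):
    """Stand-in for a provider's retryable error (rate limit, timeout)."""

def call_llm(client, prompt, retries=3, backoff_s=0.0):
    # Retry transient failures with exponential backoff; `client` is any
    # object exposing .complete(prompt), so providers stay swappable.
    for attempt in range(retries):
        try:
            return client.complete(prompt)
        except TransientError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff_s * (2 ** attempt))

class FlakyStub:
    # Test double for a real SDK: fails once, then answers.
    def __init__(self):
        self.calls = 0

    def complete(self, prompt):
        self.calls += 1
        if self.calls == 1:
            raise TransientError("rate limited")
        return f"echo: {prompt}"

stub = FlakyStub()
answer = call_llm(stub, "summarize this ticket")
```

Swapping `FlakyStub` for an OpenAI or Bedrock client adapter leaves the retry logic and call sites untouched.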

Posted 1 day ago


4.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Java Developer
Location: Gurgaon
Experience: 4-8 years
Notice period: only candidates serving notice or immediate joiners

Key Responsibilities:
● Design and develop high-performance backend systems and RESTful APIs using Java and Spring Boot.
● Collaborate with cross-functional teams, including product managers, front-end developers, and QA engineers, to deliver seamless e-commerce solutions.
● Optimize application performance and scalability for high-traffic environments.
● Ensure code quality, maintainability, and adherence to best practices through code reviews and testing.
● Integrate with third-party services and payment gateways.
● Troubleshoot, debug, and resolve production issues promptly.
● Participate in the entire software development lifecycle, from concept to deployment.

Key Skills and Qualifications:
● Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
● 5-8 years of professional experience in Java development.
● Strong proficiency in Java (8 or higher) and the Spring Boot framework.
● Experience with RESTful API design and development.
● Knowledge of microservices architecture and cloud services (e.g., AWS, Azure).
● Proficiency in databases such as MySQL, PostgreSQL, or MongoDB.
● Familiarity with tools like Maven, Gradle, and Git.
● Strong understanding of software development principles and design patterns.
● Experience in an e-commerce domain is required.
● Good communication skills and ability to work in a hybrid work environment.

Posted 1 day ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Opportunity:
athena’s developer platform is at the core of our business today, and its future. We’re looking for an experienced product leader to simplify the R&D employee experience, including developers, product managers, UX, analytics, operations, and more. Key innovation opportunities include leveraging artificial intelligence and integrating third-party applications to design and transform our product delivery process.

A top candidate will be a Chennai-based, dynamic, customer-oriented product leader able to drive business outcomes, the R&D (Product, UX, Engineering) service experience, and technical product management outcomes. Candidates will be equally passionate about technology serving colleagues as customers, and about realizing the impact of our tech strategy on our customer and user experience. This role will interface extensively with product and engineering leadership and be held accountable for the business outcomes of our tech stack (productivity, scalability, stability, and security) in partnership with several leaders around the business.

Position Summary:
The Director of Product Management will act as holistic owner of the R&D experience portfolio. Responsibilities will include building the product vision; engaging with the customer and user base to identify and represent the users of the product; making tradeoffs; and setting the priorities for R&D resource allocation. The Director will lead a team of Product Owners across 8 scrum teams and work closely with Engineering and UX counterparts, in addition to collaborating with Infrastructure, Information Security, the broader R&D organization, and vendors to ensure product success.

This team will be charged with understanding and responding to trends in the business and tech marketplace that impact the future vision and roadmap of the product portfolio; coaching PMs to build actionable backlogs of user stories for scrum teams; helping to drive overall growth and effectiveness across the entire product's scope; and partnering effectively with other product and engineering leaders to manage the internal platform services that enable data products across the organization.

Job Responsibilities:

Business Outcomes of Technology Platform
Identify and ensure that quantitative business outcomes of our tech platform are achieved (e.g., OKRs for cost, quality, productivity, developer NPS, etc.)
Evaluate and respond to top product optimization opportunities from users and internal customers
Collaborate with UX, architecture, and engineering leadership on the future state
Enforce product boundaries that align with product and tech strategy (avoiding scope creep)
Collaborate with release management on effecting SDLC process improvements

Act as Voice of Customer to Help Prioritize Product Vision
Communicate needs and opportunities for our internal tech services; understand what is next and when to stay the course
Evaluate and respond to top product optimization opportunities from internal users and developers
Support resolution of product-related escalations
Provide partnership and leadership to internal product operations and developer communications that elevate internal change management and adoption

Product Roadmap Creation / Collaboration
Define new products and services by outlining the approach to achieve desired outcomes within the boundaries of products, organized into a multi-year roadmap
Leverage insights and feedback from both developers and internal tech end users to improve product decisions
Maintain consistent and cooperative relationships with key internal leaders, the peer group, third-party vendors, and other contributors to ensure effective communication, timely decision making, and appropriate escalation paths
Make critical business decisions, including build vs. buy
Act as the accountable party for long-term portfolio and product strategy

Team Leadership
Meet scorecard goals by leading, inspiring, and developing a team of athenistas, modeling effective relationship building, ensuring high employee engagement, and managing staff career progression
Lead strategy discussions with direct reports; help them understand and own the overall strategic direction as well as the executable plan to push out to broader teams
Set and monitor annual performance goals and objectives, including responsibility for performance appraisals

Typical Qualifications:
Bachelor's degree or equivalent combination of education, training, and experience is required in a software or computer technology field; engineering, computer science, or information systems related fields strongly preferred
Experience in SaaS technology leadership roles highly preferred, especially in heterogeneous tech platforms and DevSecOps
Demonstrated technical product leadership: the ability to gut-check technical feasibility and timelines and remove technical development barriers with engineering and architecture
Demonstrated business leadership in business case development, cost savings management, outcome measurement, and financial acumen
Understanding of SaaS (Software as a Service) and PaaS (Platform as a Service) product management best practices, including experience in agile development environments
Experience leading teams and managing direct reports
International travel required, projected at 5-10%; time may vary and will include office site visits, vendor meetings, and conferences

Posted 1 day ago


12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Profile: AWS Cloud Infrastructure Architect | Pune | Join by August!
Experience: 8-12 years
Location: Pune (hybrid/on-site as per role requirement)
Notice Period: immediate to August joiners

Are you passionate about architecting scalable and secure cloud infrastructure? Join our dynamic team as an AWS Cloud Infrastructure Architect, where you'll play a key role in designing and managing advanced cloud solutions that power mission-critical applications.

Key Responsibilities:
Architect and lead AWS-based infrastructure solutions with a focus on scalability, reliability, and security.
Manage and optimize AWS RDS (SQL) and EC2 SQL environments.
Design and execute SQL database migration strategies within AWS.
Implement infrastructure as code (IaC) using Terraform.
Handle configuration management using Chef.
Develop and maintain automation scripts in Ruby for deployments and configurations.
Collaborate with DevOps, Security, and Development teams to define and deliver infrastructure solutions.
Proactively monitor performance and troubleshoot issues across cloud and database layers.
Ensure adherence to security best practices and governance policies.

Must-Have Skills:
AWS cloud services: EC2, RDS, IAM, VPC, CloudWatch, etc.
Strong hands-on experience in managing AWS infrastructure.
Deep expertise in SQL RDS and EC2 SQL.
Proven experience in database migrations within AWS.
Advanced knowledge of Terraform for IaC.
Proficiency in Chef for configuration management.
Strong Ruby scripting and automation experience.
Excellent problem-solving and analytical skills.

Posted 1 day ago


6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Role
We're seeking an experienced Senior Systems Integrator Lead to spearhead the integration of our cutting-edge LLM solutions with diverse enterprise systems. This is a technical leadership role where you'll be hands-on in architecting, building, and deploying complex integration solutions while providing guidance and mentorship to a team of engineers. You'll be at the forefront of connecting disparate systems, orchestrating seamless LLM integrations, and establishing best practices for AI-driven system architecture. The ideal candidate combines deep technical expertise in systems integration with proven leadership capabilities and extensive experience in LLM/Generative AI implementations.

Key Responsibilities

Technical Leadership & Team Guidance
Lead Integration Architecture: Design and oversee complex, multi-system integration strategies that seamlessly connect LLM solutions with existing enterprise infrastructure
Team Technical Guidance: Mentor and guide development teams on integration best practices, code architecture patterns, and LLM implementation strategies
Hands-on Development: Remain technically hands-on, writing code, conducting code reviews, and troubleshooting complex integration challenges
Standards & Best Practices: Establish and enforce integration standards, development workflows, and quality assurance processes

LLM & AI Integration Expertise
Advanced LLM Integration: Design and implement sophisticated integration patterns for various LLM providers (OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, etc.)
AI Pipeline Architecture: Build robust, scalable pipelines for prompt engineering, response processing, and model orchestration
Performance Optimization: Optimize LLM integration performance, including token management, caching strategies, and response-time optimization
Multi-modal AI Integration: Integrate text, image, and other AI modalities into existing business workflows

Systems Integration & Architecture
Enterprise Integration Patterns: Implement complex integration solutions using APIs, message queues, ETL/ELT pipelines, and event-driven architectures
Microservices Architecture: Design and maintain microservices-based integration layers with proper service mesh, API gateway, and monitoring implementations
Cloud-Native Solutions: Architect cloud-native integration solutions leveraging containers, serverless functions, and managed services
Data Flow Management: Ensure secure, efficient data flow between systems while maintaining data integrity and compliance requirements

Full-Stack Development & UI Integration
React.js Applications: Build sophisticated front-end applications using React.js that interface with LLM backends and integrated enterprise systems
API Development: Design and implement RESTful and GraphQL APIs that serve as integration points between systems
Real-time Features: Implement real-time capabilities for AI interactions using WebSockets, Server-Sent Events, or similar technologies

Collaboration & Communication
Cross-functional Leadership: Work with product managers, data scientists, DevOps teams, and business stakeholders to translate requirements into technical solutions
Technical Documentation: Create comprehensive architecture documentation, integration guides, and system design specifications
Knowledge Sharing: Conduct technical sessions, workshops, and knowledge transfer meetings with team members and stakeholders

Key Experiences

Experience & Leadership
6-8+ years of systems integration experience, with 2+ years in technical leadership roles
Proven team leadership experience, including mentoring junior developers and leading technical initiatives
3+ years of hands-on experience with LLM integration, Generative AI implementations, and AI/ML pipeline development

Technical Skills
LLM Integration Expertise: Deep experience with major LLM providers' APIs, prompt engineering, fine-tuning, and deployment strategies
Integration Technologies: Advanced knowledge of REST/GraphQL APIs, message brokers (Kafka, RabbitMQ), ETL tools, and integration platforms
Cloud Platforms: Proficiency with AWS, Azure, or GCP, including serverless architectures, container orchestration, and managed AI services
React.js Mastery: Strong expertise in React.js, modern JavaScript (ES6+), TypeScript, and state management libraries
Database Integration: Experience with both SQL and NoSQL databases, data modeling, and database integration patterns
DevOps & Monitoring: Knowledge of CI/CD pipelines, containerization (Docker/Kubernetes), and observability tools

Architecture & Design
Software Architecture: Strong understanding of microservices, event-driven architectures, and distributed system design patterns
Security & Compliance: Knowledge of API security, data encryption, and compliance frameworks (SOC 2, GDPR, etc.)
Performance Engineering: Experience in system performance optimization, load balancing, and scalability planning

Soft Skills
Technical Communication: Excellent ability to communicate complex technical concepts to both technical and business stakeholders
Problem-Solving: Strong analytical and troubleshooting skills with a solutions-oriented mindset
Adaptability: Comfortable working in fast-paced environments with evolving requirements and emerging technologies

Preferred Experience
Experience with vector databases and semantic search implementations
Knowledge of prompt engineering frameworks and AI agent architectures
Background in enterprise software integration (SAP, Salesforce, ServiceNow, etc.)
Experience with infrastructure-as-code (Terraform, CloudFormation) Previous experience in AI/ML product development or consulting What You'll Bring to the Team Technical expertise that can tackle the most complex integration challenges Leadership skills to guide and grow a high-performing engineering team Strategic thinking to align technical solutions with business objectives Hands-on mentality with the ability to dive deep into code when needed Innovation mindset to explore and implement cutting-edge AI integration patterns
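The caching and token-management concerns the responsibilities mention can be sketched in a provider-agnostic way. This is an illustrative example only (the class name, the `call_fn` hook, and the echo stand-in are ours, not any vendor's SDK): memoising identical prompt/parameter pairs avoids repeated billable calls to a provider.

```python
import hashlib
import json
from typing import Callable, Dict

class CachedLLMClient:
    """Provider-agnostic wrapper that memoises identical prompt requests.

    `call_fn` is any callable taking a prompt and returning the model's text
    response -- in practice it would wrap a real provider SDK call.
    """

    def __init__(self, call_fn: Callable[[str], str]):
        self.call_fn = call_fn
        self.cache: Dict[str, str] = {}
        self.calls = 0  # number of real provider invocations

    def _key(self, prompt: str, params: dict) -> str:
        # Canonical JSON so the same prompt + parameters always hash identically
        payload = json.dumps({"prompt": prompt, **params}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def complete(self, prompt: str, **params) -> str:
        key = self._key(prompt, params)
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.call_fn(prompt)
        return self.cache[key]

# Stand-in for a real provider call, used here so the sketch runs offline
client = CachedLLMClient(lambda p: f"echo: {p}")
client.complete("summarise Q3 report", temperature=0.0)
client.complete("summarise Q3 report", temperature=0.0)  # served from cache
print(client.calls)  # 1
```

Changing any parameter (e.g. `temperature`) produces a new cache key, so distinct requests still reach the provider.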

Posted 1 day ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Data Engineer
Location: Chennai, India or Hyderabad, India
Workplace Type: Hybrid

About the Role
We are seeking a highly motivated and experienced Data Engineer to join our dynamic team. In this role, you will be responsible for designing, building, and maintaining our data infrastructure on Google Cloud Platform (GCP). You will work closely with data scientists, analysts, and other engineers to ensure the availability, reliability, and scalability of our data pipelines. The ideal candidate will have a strong background in Python programming, GCP services, and data warehousing concepts, and will be comfortable working in a fast-paced environment with a passion for solving complex data challenges. This position offers an excellent opportunity to contribute to a growing organization and make a significant impact on our data-driven decision-making processes. You will be involved in the full lifecycle of data projects, from requirements gathering to deployment and monitoring. We are looking for someone who is proactive, detail-oriented, and has excellent communication skills.

Key Responsibilities
- Design, develop, and maintain data pipelines using Python and GCP services (e.g., Dataflow, Dataproc, BigQuery).
- Build and maintain data warehouses and data lakes on GCP.
- Implement data quality checks and monitoring to ensure data accuracy and reliability.
- Collaborate with data scientists and analysts to understand their data needs and provide solutions.
- Optimize data pipelines for performance and scalability.
- Automate data ingestion, transformation, and loading processes.
- Develop and maintain documentation for data pipelines and data models.
- Troubleshoot and resolve data-related issues.
- Stay up to date with the latest GCP services and data engineering best practices.
- Participate in code reviews and contribute to the improvement of our data engineering processes.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-7 years of experience in data engineering.
- Strong proficiency in Python programming.
- Extensive experience with GCP services, including Dataflow, Dataproc, BigQuery, Cloud Storage, and Cloud Functions.
- Experience with data warehousing concepts and technologies.
- Experience with data modeling and ETL processes.
- Strong understanding of SQL and database technologies.
- Experience with data quality and data governance principles.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
- Experience with version control systems (e.g., Git).

Additional Information
This position is based in Chennai or Hyderabad and requires candidates who are available to join within an immediate to 20-day notice period. We offer a competitive salary and benefits package, as well as opportunities for professional growth and development. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status, and we are committed to creating an inclusive environment for all employees. The role may require occasional travel to other company locations or client sites. If you are a talented Data Engineer with a passion for GCP and Python, we encourage you to submit your application today.
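The data quality checks mentioned in the responsibilities can, in their simplest form, look like the stdlib-only sketch below (the function and field names are illustrative, not part of any specific pipeline); in production, logic like this would typically run inside a Dataflow transform or a BigQuery validation step.

```python
from typing import Dict, List, Tuple

def validate_rows(rows: List[Dict], required_fields: List[str]) -> Tuple[List[Dict], List[Dict]]:
    """Split rows into (valid, invalid) based on non-empty required fields."""
    valid, invalid = [], []
    for row in rows:
        # A field is "missing" if absent, None, or an empty string
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        (invalid if missing else valid).append(row)
    return valid, invalid

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},  # fails the check: empty email
]
valid, invalid = validate_rows(rows, ["id", "email"])
print(len(valid), len(invalid))  # 1 1
```

Routing the invalid rows to a dead-letter table rather than dropping them keeps the check auditable.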

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
The Senior Developer will be responsible for developing and implementing solutions using DevOps practices, Python programming, and Kubernetes. The primary objective is to enhance system efficiency, automation, and scalability in alignment with the organization's objectives.

Key Responsibilities
1. Develop and deploy secure and scalable solutions using DevOps principles.
2. Write efficient, maintainable, and reusable code in Python.
3. Implement containerization and orchestration using Kubernetes for microservices architectures.
4. Collaborate within the team to troubleshoot and optimize system performance.
5. Work on continuous integration and continuous deployment (CI/CD) pipelines.
6. Ensure high levels of security and data protection in all development tasks.
7. Stay updated on industry trends and technologies related to DevOps, Python, and Kubernetes.
8. Provide technical guidance and mentorship to junior team members.

Skill Requirements
1. Proficiency in DevOps principles and tools such as Docker, Jenkins, Ansible, and Terraform.
2. Strong programming skills in Python, with experience in the Django and Flask frameworks.
3. Hands-on experience with Kubernetes for container orchestration.
4. Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
5. Familiarity with monitoring and logging tools such as Prometheus, the ELK stack, or Grafana.
6. Understanding of agile methodologies and experience working in an agile environment.
7. Good problem-solving and communication skills.
8. Ability to work effectively in a team as well as independently.

Certifications: Relevant certifications in DevOps, Python, and Kubernetes are a plus.
Location: Chennai
Notice Period: Immediate to 60 days

If relevant and interested, please share your resume with Susan.angelinej@hcltech.com or 9677221121.

Thanks & Regards
Susan Angeline J
HCL Technologies Pvt Ltd
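Kubernetes decides whether a pod receives traffic by polling readiness endpoints, so services deployed this way usually expose explicit health state. A minimal, stdlib-only sketch of that state (the class and endpoint names are illustrative; a real service would serve these values from `/healthz` and `/readyz` HTTP handlers):

```python
import threading

class HealthState:
    """Liveness/readiness flags that Kubernetes probe handlers could expose."""

    def __init__(self):
        self._ready = threading.Event()

    def mark_ready(self) -> None:
        """Call once startup work (DB connections, cache warmup) completes."""
        self._ready.set()

    def readiness(self) -> tuple:
        """(status_code, body) that a /readyz handler would return to the probe."""
        return (200, "ok") if self._ready.is_set() else (503, "warming up")

state = HealthState()
print(state.readiness())  # (503, 'warming up') until startup completes
state.mark_ready()
print(state.readiness())  # (200, 'ok')
```

Returning 503 until dependencies are up is what lets a rolling update hold traffic back from pods that are not yet serviceable.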

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Before you apply
It's important that we assess you for the programme that really suits your talents. Please make only one application; if you make more than one, we'll accept only your first.

Job Description
Join us as a Software Engineer
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
- Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
- It's a chance to hone your existing technical skills and advance your career

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. You'll be working within a feature team, using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform.

You'll also be:
- Producing complex and critical software rapidly and to a high standard of quality that adds value to the business
- Working in permanent teams responsible for the full life cycle, from initial development, through enhancement and maintenance, to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing and implementing our working code
- Working across the life cycle, from requirements analysis and design, through coding, to testing, deployment and operations

The skills you'll need
You'll need a background in software engineering, software design and architecture, and an understanding of how your area of expertise supports our customers.

You'll also need:
- Experience of working with development and testing tools, bug tracking tools and wikis
- Experience in multiple programming languages or low-code toolsets
- Experience of DevOps, testing and Agile methodology and associated toolsets
- A background in solving highly complex, analytical and numerical problems
- Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
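One concrete instance of the availability best practice the skills list mentions is retrying transient failures with exponential backoff. A minimal Python sketch (the helper name, delays and attempt counts are illustrative, not a specific library's API):

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff on any exception."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

# A flaky dependency that fails twice before succeeding
failures = {"left": 2}
def flaky():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("transient")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt
```

Production variants usually add jitter to the delay and retry only on error types known to be transient.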

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies