
149365 Python Jobs - Page 20

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a hands-on Solution Architect / Tech Owner with deep expertise in building and scaling Data & AI products end to end. This role demands a strong foundation in cloud-native architecture (AWS), machine learning (ML), LLM-based GenAI integration, and a design-thinking mindset to translate business needs into resilient, high-performance solutions. The ideal candidate has led the technical delivery of scalable GenAI platforms, architected modular backend systems and frontend interfaces, and built ML pipelines in production environments. You will be responsible for owning system design, guiding technical teams, deploying full-stack infrastructure, and ensuring robust governance and security across all layers.

Key Responsibilities:
- Architect and deliver end-to-end GenAI platforms using AWS (ECS, RDS, Lambda, S3) with real-time LLM orchestration and RAG workflows.
- Design and implement Python microservices with Redis caching and vector search using Qdrant or Redis Vector.
- Integrate GenAI models and APIs (OpenAI, Hugging Face, LangChain, LangGraph), including containerized inference services and secured API pipelines.
- Lead frontend architecture using Next.js (TypeScript) with SSR and scalable client-server routing.
- Own infrastructure automation and DevOps using Terraform, AWS CDK, GitHub Actions, and Docker-based CI/CD pipelines.
- Manage and optimize data architecture across Snowflake, PostgreSQL (RDS), and S3 for both analytical and transactional needs.
- Apply knowledge of data pipelines and data quality to transition legacy systems to modular, cloud-native deployments.
- Champion engineering culture: lead design/code reviews, mentor team members, and align technical priorities with product strategy.
- Ensure compliance, encryption, and data protection via AWS security best practices (IAM, Secrets Manager, WAF, API Gateway).
Ideal Candidate Profile:
- Proven track record as a Solution Architect / Tech Lead on large-scale Data & AI products with GenAI integration.
- Deep knowledge of AWS cloud services, microservices architecture, and full-stack deployment.
- Strong understanding of the ML lifecycle and productionization of LLMs / GenAI APIs.
- Practical experience with design thinking, breaking down problems from user need to system delivery.
- Excellent leadership, communication, and mentoring skills to drive team alignment and technical execution.

Skills: AWS, Snowflake, Generative AI
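The RAG workflows this listing centers on rest on a retrieval step: rank stored document embeddings by similarity to a query embedding. A minimal, self-contained sketch of that step using cosine similarity (the toy two-dimensional vectors stand in for real model embeddings; in production a vector store such as Qdrant performs this at scale):

```python
from math import sqrt

def cosine_sim(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=2):
    # Rank document embeddings by similarity to the query embedding,
    # the core retrieval pass of a RAG pipeline.
    scores = [(i, cosine_sim(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]  # toy "document" embeddings
query = [1.0, 0.1]                            # toy "query" embedding
print(retrieve(query, docs))
```

The top-ranked indices would then be used to fetch the matching text chunks and pass them to the LLM as context.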

Posted 9 hours ago

Apply

1.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job description

We’re Hiring: Machine Learning Trainer (Part-Time, Onsite – Bhandup, Mumbai)

Looking to build a future in AI while shaping careers on the ground? We’re offering an exciting dual-track opportunity for passionate professionals who want to train, develop, and be government-recognized for their contribution.

👨‍🏫 The Role: Conduct in-person training sessions in Machine Learning & Python for aspiring professionals at our Bhandup center.
⏰ Timings: 4:00 PM – 8:30 PM
📆 Days: Monday to Saturday
📍 Location: Bhandup (Mumbai)
🧭 Mode: Onsite only
💼 Role Type: Part-Time

🎯 Who This Is Perfect For:
👩‍💻 Working ML/Python developers who want to:
- Share their real-world experience in a classroom setting
- Upskill through training + live project exposure
- Get certified as a government-recognized trainer
🎓 Aspiring trainers with domain expertise, looking to:
- Step into a structured training role
- Receive recognition and backing from a reputed skilling initiative
- Make an impact in the AI skilling ecosystem

✅ Eligibility (any one):
- B.E./B.Tech in Electronics, Telecom, or IT
- Graduate with 1+ year relevant experience
- Diploma (3 years post-12th) with 1+ year experience
- ITI (post-10th) with 4+ years experience
- 12th pass with 4+ years experience

💡 What You’ll Bring:
🔹 Solid grasp of Machine Learning & Python
🔹 Ability to simplify complex topics
🔹 A confident, energetic classroom presence
🔹 Basic skills in MS Office, PPTs, and digital tools

🎁 Perks of the Role:
📜 Government-recognized trainer certification
🤝 Potential to work on live AI development projects
🧠 Hands-on experience in shaping real careers
🛠️ Build your portfolio as both a developer and mentor

Salary: ₹27,000 per month

Posted 10 hours ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

About ICP
ICP is a global leader in Content Operations, partnering with some of the world's most recognizable brands, including Unilever, Diageo, Coca-Cola, Mars, P&G, Starbucks, Coty, L’Oréal, NBCU, and Aetna. Our expertise spans content management, logistics, creative operations, production, and digital commerce enablement, ensuring a seamless flow of content across all systems and channels. We empower brands to achieve operational excellence and confidently manage their content. Content confidently.™ With offices in Atlanta, London, Mexico City, Mumbai, and Shanghai, we operate on a global scale, delivering world-class solutions that drive exceptional business outcomes.

Who We Are
At ICP, our values define us: we are Curious, Focused, Creative, Trustworthy, and Inclusive.

We're a People First Company
At ICP, we provide benefits that matter to our people and enable us to be engaged both in and outside of work. We foster a culture where work/life balance is nurtured and encouraged, offering hybrid working, generous paid time off, paid holidays, volunteer time off, and summer half-day Fridays. We also take care of our people with competitive medical, dental, and vision benefits, mental health support, and a robust savings plan.

Bring the Confidence
Are you a relationship-focused, driven professional with a growth mindset? Do you thrive on breaking through challenges and excelling in competitive environments? You're not expected to have all the answers, but your passion for uncovering solutions and building strong partnerships makes you the perfect fit for this role. We'd love to hear from you!

Key Responsibilities

Configuration and Customisation
- Configure Aprimo DAM based on solution designs and requirements gathered in design workshops.
- Create and manage users, roles, and permissions within Aprimo.
- Develop and configure metadata schemas, taxonomies, and controlled vocabularies within Aprimo.
- Design and implement workflows for asset ingestion, approval, distribution, and archival within Aprimo.
- Configure Aprimo search settings and optimize search results.
- Implement security policies and access controls within Aprimo.

API and Integration Development
- Utilize the Aprimo API to develop custom integrations with other systems (e.g., CMS, PIM, eCommerce platforms).
- Develop and implement WebHooks and PageHooks to extend Aprimo functionality and create custom user experiences.
- Troubleshoot and resolve issues related to Aprimo configurations and integrations.

Documentation and Training
- Create and maintain detailed documentation of Aprimo configurations and customizations.
- Assist with the development of training materials for Aprimo users.
- Provide support to Aprimo users on configuration-related issues.

Ongoing Support and Optimisation
- Work closely with the Aprimo DAM Lead and other team members to ensure the successful implementation of Aprimo solutions.
- Participate in design workshops and provide technical input.
- Stay up to date on the latest Aprimo features, functionalities, and best practices.

Collaboration and Support
- Align Aprimo DAM solutions with ICP's overall technology strategy and business objectives.
- Establish and maintain Aprimo DAM governance standards and best practices, including metadata standards, naming conventions, and security protocols.
- Work with IT leadership to ensure the security and compliance of Aprimo DAM implementations.

What You'll Have

Education & Experience
- Bachelor's degree (or equivalent) in a relevant field (e.g., Computer Science, Information Systems, or related).
- 5+ years of experience configuring and customizing enterprise software applications.
- 3+ years of experience working with Aprimo DAM (preferred).

Technical Expertise
- Strong understanding of digital asset management principles.
- Experience configuring metadata schemas, taxonomies, and workflows.
- Familiarity with REST APIs and web services.
- Experience with scripting languages (e.g., JavaScript, Python) is a plus.
- Ability to troubleshoot technical issues.

Soft Skills
- Excellent communication and interpersonal skills.
- Strong problem-solving and analytical skills.
- Detail-oriented and organized.
- Ability to work independently and as part of a team.
- Client-focused mindset.
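DAM integration work of the kind described often begins with assembling a metadata payload for asset ingestion before calling the vendor API. A hypothetical sketch (the field names below are illustrative assumptions, not Aprimo's actual REST schema):

```python
import json

def build_asset_payload(title, tags, classification):
    # Assemble a JSON payload for a DAM asset-ingestion call.
    # NOTE: "title", "tags", "classification" are invented example
    # fields, not the real Aprimo schema.
    if not title:
        raise ValueError("title is required")
    return json.dumps({
        "title": title,
        "tags": sorted(set(tags)),  # de-duplicate taxonomy terms
        "classification": classification,
    })

payload = build_asset_payload(
    "summer-campaign.png", ["campaign", "2024", "campaign"], "marketing"
)
print(payload)
```

In a real integration, this payload would be POSTed to the DAM's REST endpoint with appropriate authentication.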

Posted 10 hours ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities:
● Utilize Python to build and sustain software applications with a strong focus on AI and machine learning.
● Ensure performance, usability, and scalability in AI applications by leveraging advanced Python techniques.
● Identify and resolve issues to maintain low latency and high availability in AI systems.
● Participate in and conduct code reviews, providing constructive feedback on software design and architecture.
● Work with cross-functional teams to define project requirements and scope, ensuring alignment with AI objectives.
● Apply your expertise in AI libraries and frameworks like LangChain, OpenAI, LlamaIndex, Pandas, NumPy, or similar tools.
● Work with large language models such as GPT-4 and Llama, and vector databases like Pinecone, ChromaDB, and FAISS.
● Integrate Python applications with databases, ensuring efficient data storage and retrieval.
● Utilize Amazon Web Services (AWS) for deploying and managing cloud-based AI applications.
● Utilize robust analytical and problem-solving abilities to tackle complex AI challenges.
● Exhibit excellent communication and teamwork skills to collaborate effectively within the team and with stakeholders.

Key Requirements:
● Degree in Computer Science, Engineering, or a related field.
● Minimum 6 years of relevant experience is a must.
● Proven experience as a Python Developer, with a focus on AI and machine learning projects.
● Strong knowledge of Django, Flask, or similar Python frameworks, with an emphasis on AI integration.
● Proficient in integrating Python applications with databases.
● Experience with Amazon Web Services (AWS) for cloud-based solutions.
● Familiarity with large language model (LLM) frameworks for AI development.
● Familiarity with concepts such as data chunking, embedding, and similarity search approaches like cosine similarity.

Why Join Us?
● Be part of a team that is working on cutting-edge technology products in the AI and SaaS space.
● Experience high growth potential within a pioneering company.
● Engage in a challenging environment where you solve interesting problems every day.
● Work on innovative products that have a real impact on enterprise customers.
● Collaborate with a talented and diverse team of experts in the field.
● Enjoy a flexible work environment with ample opportunities for growth and development.
● Receive a competitive compensation and benefits package.
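The data chunking the requirements mention is the step that splits documents into overlapping pieces before embedding them for similarity search. A minimal sketch (the chunk size and overlap values are illustrative):

```python
def chunk_text(text, size=100, overlap=20):
    # Split text into overlapping fixed-size chunks, a common
    # preprocessing step before embedding documents for vector search.
    # Overlap preserves context that would be cut at chunk boundaries.
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

sample = "a" * 250
print([len(c) for c in chunk_text(sample, size=100, overlap=20)])  # [100, 100, 90, 10]
```

Each chunk would then be embedded and stored in a vector database such as Pinecone, ChromaDB, or FAISS.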

Posted 10 hours ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Overview

With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on.

At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers are on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment to childcare to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose (people), then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

We are seeking an experienced Lead Software Engineer (full-stack developer, strong in UI) to join our dynamic team. This role provides an opportunity to lead projects and contribute to high-impact software solutions that are used by enterprises and users worldwide. You will be responsible for the design, development, testing, deployment, and maintenance of complex software systems, as well as mentoring junior engineers. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services.

Responsibilities

Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
Technical Leadership: Lead the design, development, and deployment of complex software applications and systems, ensuring they meet high standards of quality and performance.
Project Management: Manage the execution and delivery of features and projects, negotiating project priorities and deadlines and ensuring successful, timely, high-quality completion.
Architectural Design: Participate in or lead design reviews with peers and stakeholders, and contribute to the architectural design of new features and systems, ensuring scalability, reliability, and maintainability.
Mentorship: Provide technical mentorship and guidance to junior engineers, fostering a culture of learning and a growth mindset.
Code Review: Diligently review code developed by other developers, provide feedback, and maintain a high bar of technical excellence, ensuring code adheres to industry-standard best practices: coding guidelines; elegant, efficient, and maintainable code; observability built in from the ground up; unit tests.
Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
Debugging and Troubleshooting: Diagnose and resolve technical issues, ensuring high-quality service operations.
Service Health and Quality: Maintain the health and quality of services and incidents, proactively identifying and resolving issues. Utilize service health indicators and telemetry to act on and provide recommendations for optimizing service performance. Lead and conduct thorough root cause analysis and drive the implementation of measures to prevent future recurrences.
DevOps Model: Work in a DevOps model, taking ownership from working with product management on requirements through design, development, testing, deployment, and maintenance of the software in production.
Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.
Innovation: Stay current with emerging technologies and industry trends, advocating for their adoption where appropriate to drive innovation and productivity within the team (e.g., Copilot).

Minimum Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- 4+ years of professional software development experience.
- Deep expertise in one or more programming languages such as C, C++, C#, .NET, Python, Java, or JavaScript.
- Extensive experience with software development practices and design patterns.
- Proven track record of delivering complex software projects on time and within budget.
- Proficiency with version control systems like GitHub and bug/work tracking systems like JIRA.
- Understanding of cloud technologies and DevOps principles.
- Strong problem-solving skills and attention to detail.
- Excellent communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.

Preferred Qualifications
- Master's degree in Computer Science, Engineering, or a related technical field.
- Experience with cloud platforms like Azure, AWS, or GCP.
- Familiarity with CI/CD pipelines and automation tools.
- Experience with test automation frameworks and tools.
- Knowledge of agile development methodologies.
- Demonstrated ability to mentor and guide junior engineers.
- Commitment to continuous learning and professional development.
- Familiarity with developing accessible technologies.
- Dedicated to diversity and inclusion initiatives.

Where we're going
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management.
Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it’s our AI-powered product portfolio designed to support customers of all sizes, industries, and geographies that will propel us into an even brighter tomorrow! UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process. Disability Accommodation in the Application and Interview Process For individuals with disabilities that need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com

Posted 10 hours ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Information Security Specialist (Cloud, VAPT, Forensics, ISO 27001)
Location: Noida Sector 62

Role Overview:
We are seeking a hands-on security professional to drive cloud security, VAPT, forensics, and ISO 27001 implementation while managing governance, risk, and compliance initiatives. This is a hybrid technical + compliance role requiring the ability to move from high-level strategy to deep technical execution.

Key Responsibilities:
- Implement and maintain ISMS (ISO 27001) and PIMS (ISO 27701).
- Conduct VAPT for cloud, web apps, APIs, and infrastructure.
- Perform digital forensics and incident investigations.
- Manage cloud security (AWS, Azure, GCP): IAM, WAF, encryption, security groups.
- Oversee compliance with NIST, GDPR, the DPDP Act, and other regulations.
- Develop and maintain security policies, procedures, and audit reports.
- Coordinate audits and track closure of findings.
- Deliver security and privacy awareness training.
- Collaborate with IT, DevOps, and legal to embed security practices.
- Monitor the threat landscape and update frameworks proactively.

Hands-On Expectation: This is not just a policy role: you will actively run security tools, perform VAPT, conduct forensics, analyze logs, and configure cloud security controls.

Qualifications:
- 5–8 years in security operations and GRC roles.
- Proven ISO 27001 and ISO 27701 implementation experience.
- Strong cloud security and forensics expertise.
- Certifications: ISO 27001 Lead Auditor/Implementer, CEH, OSCP, AWS Security Specialty, GIAC Forensics (preferred).

Preferred Skills:
- Tools: Burp Suite, Nessus, Acunetix, Qualys, Metasploit, OpenVAS.
- Cloud Security: AWS GuardDuty, Azure Security Center, GCP SCC.
- Forensics: FTK, Autopsy, Volatility, Wireshark.
- Platforms: Splunk, Sentinel, QRadar; EDRs like SentinelOne, CrowdStrike.
- Scripting: Python, PowerShell, Bash.
- WAFs: AWS WAF, Cloudflare, F5, Imperva.
- DevSecOps: Docker, Kubernetes, CI/CD security.

Personal Attributes:
- Strong communicator able to simplify technical concepts.
- Highly organized with project management skills.
- Integrity, discretion, and ability to work under pressure.
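Since the role expects hands-on log analysis with Python, here is a minimal sketch of one common task: tallying failed-login attempts per source IP. The simplified log format is invented for illustration, not any product's actual format:

```python
from collections import Counter

def count_failed_logins(log_lines):
    # Tally failed-login attempts per source IP from simplified
    # auth-log lines of the form "<status> <user> <ip>".
    failures = Counter()
    for line in log_lines:
        status, _user, ip = line.split()
        if status == "FAIL":
            failures[ip] += 1
    return failures

logs = [
    "FAIL root 10.0.0.5",
    "OK alice 10.0.0.7",
    "FAIL root 10.0.0.5",
    "FAIL bob 10.0.0.9",
]
print(count_failed_logins(logs).most_common(1))  # [('10.0.0.5', 2)]
```

A real script would parse syslog or SIEM exports and alert when a source crosses a threshold.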

Posted 10 hours ago

Apply

0.0 years

3 - 6 Lacs

New Delhi G.P.O., Delhi, Delhi

On-site

Python Backend Engineer – Maps & Spatial Data
Location: New Delhi

Role Overview
We are seeking a skilled Python Backend Engineer with expertise in geospatial data handling. The role will focus on building and optimizing backend systems for large-scale map and spatial data processing, including routing, GPS integration, and street-view mapping.

Key Responsibilities:
1. Design, develop, and maintain backend APIs using Python and Django/GeoDjango.
2. Manage and optimize spatial databases using PostgreSQL + PostGIS.
3. Implement large, distributed task queues with Celery and RabbitMQ.
4. Integrate Redis for caching and performance improvements.
5. Deploy applications using Gunicorn on Linux-based environments.
6. Handle GIS datasets, including ingestion, querying, and spatial analysis.
7. Work with OSRM to create routing solutions and generate mapping images for street view.
8. Develop systems for GPS data handling, including parsing, storage, and route mapping.
9. Collaborate with frontend, AI, and data teams to deliver mapping-based features.

Required Skills:
1. Strong proficiency in Python and Django/GeoDjango.
2. Hands-on experience with PostgreSQL and PostGIS.
3. Experience with Celery, RabbitMQ, and Redis.
4. Strong experience writing SQL queries.
5. Proficiency in Git/Bitbucket workflows.
6. Strong Linux system knowledge.
7. Familiarity with Gunicorn deployment.
8. Proven experience handling large-scale spatial databases.
9. Practical experience with OSRM routing and street-view mapping workflows.
10. Experience in GPS data processing and integration into mapping systems.

Job Type: Permanent
Pay: ₹337,771.24 – ₹696,104.60 per year
Benefits:
- Paid sick time
- Paid time off
- Provident Fund
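GPS data handling of the kind this listing describes often starts with great-circle distance between two points, which PostGIS computes server-side but which is also easy to sketch in pure Python with the haversine formula (the coordinates below are approximate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS points in kilometres,
    # using the haversine formula with a mean Earth radius of 6371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Distance from New Delhi to Mumbai (approximate coordinates).
print(round(haversine_km(28.6139, 77.2090, 19.0760, 72.8777)))
```

In production, PostGIS geography-type queries or OSRM's routed distances would replace this straight-line estimate.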

Posted 10 hours ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
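ETL pipelines like the ones this role manages are built from small transform steps. A minimal pure-Python sketch of the kind of record cleaning an Airflow task might run (the field names and rules are illustrative assumptions):

```python
def clean_records(rows):
    # Minimal transform step: drop rows missing an id, cast the id to
    # int and the amount to float, and normalise names.
    cleaned = []
    for row in rows:
        if row.get("id") is None:
            continue
        cleaned.append({
            "id": int(row["id"]),
            "name": row.get("name", "").strip().lower(),
            "amount": float(row.get("amount", 0) or 0),
        })
    return cleaned

raw = [
    {"id": "1", "name": " Alice ", "amount": "10.5"},
    {"id": None, "name": "ghost", "amount": "1"},
    {"id": "2", "name": "Bob", "amount": None},
]
print(clean_records(raw))
```

At scale, the same logic would typically be expressed as a PySpark DataFrame transformation inside a Databricks job.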

Posted 10 hours ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 321770BR
Job Type: Full Time

Your role
Do you have a curious mind, want to be involved in the latest technology trends, and like to solve problems that have a meaningful benefit to hundreds of users across the bank? Join our Tech Services – Group Chief Technology Office team and become a core contributor to the execution of the bank's global AI strategy, particularly helping the bank deploy AI models quickly and efficiently!

We are looking for an experienced Data Engineer or ML Engineer to drive the delivery of an innovative ecosystem of tools and services. In this AI-focused role, you will contribute to the development of an SDK for Data Producers across the firm to build high-quality autonomous Data Products for cross-divisional consumption, and for Data Consumers (e.g. Data Scientists, Quantitative Analysts, Model Developers, Model Validators, and AI agents) to easily discover and access data and build AI use cases.

Responsibilities include:
- direct interaction with product owners and internal users to identify requirements, development of technical solutions, and execution
- developing an SDK (Software Development Kit) to automatically capture Data Product, Dataset, and AI/ML model metadata, and leveraging LLMs to generate descriptive information about assets
- integration and publication of metadata into UBS's AI use-case inventory, model artifact registry, and Enterprise Data Mesh data product and dataset catalogue, for discovery and regulatory compliance purposes
- design and implementation of services that seamlessly collect runtime evidence and operational information about a data product or model and publish it to appropriate visualization tools
- creation of a collection of starters/templates that accelerate the creation of new data products by leveraging the latest tools and services and providing diverse and rich experiences to the Devpod ecosystem
- design and implementation of data contracts and fine-grained access mechanisms to enable data consumption on a 'need to know' basis

Your team
You will be part of the Data Product Framework team, a newly established function within Group Chief Technology Office. We provide solutions to help the firm embrace Artificial Intelligence and Machine Learning. We work with the divisions and functions of the firm to provide innovative solutions that integrate with their existing platforms to provide new and enhanced capabilities. One of our current aims is to help a data scientist get a model into production in an accelerated timeframe with the appropriate controls and security. We offer a number of key capabilities: data discovery that uses AI/ML to help users find data and obtain access in a secure and controlled manner; an AI inventory that describes the models that have been built, to help users build their own use cases and validate them with Model Risk Management; a containerized model development environment for users to experiment and produce their models; and a streamlined MLOps process that helps them track their experiments and promote their models.

Your expertise
- PhD or Master's degree in Computer Science or a related advanced quantitative discipline
- 5+ years of industry experience with Python/Pandas, SQL/Spark, Azure fundamentals/Kubernetes, and GitLab
- additional experience in data engineering frameworks (Databricks/Kedro/Flyte), ML frameworks (MLflow/DVC), and agentic frameworks (LangChain, LangGraph, CrewAI) is a plus
- ability to produce secure, clean code that is stable, scalable, operational, and well-performing, staying up to date with the latest IT standards (security, best practices); understanding of security principles in banking systems is a plus
- ability to work independently and manage individual project priorities, deadlines, and deliverables
- willingness to quickly learn and adopt various technologies
- excellent English written and verbal communication skills

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management, and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process.

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves. We're committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
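The metadata-capture idea behind the SDK described above can be sketched with a simple in-memory stand-in for a catalogue. The field names and structure here are assumptions for illustration only, not UBS's actual schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DatasetMetadata:
    # Illustrative metadata record for a dataset in a data-product
    # catalogue; the fields are invented for this sketch.
    name: str
    owner: str
    tags: list = field(default_factory=list)
    captured_at: str = ""

    def __post_init__(self):
        # Stamp capture time automatically, as an SDK might on registration.
        if not self.captured_at:
            self.captured_at = datetime.now(timezone.utc).isoformat()

def register(catalogue, meta):
    # Publish metadata into a dict standing in for the real catalogue service.
    catalogue[meta.name] = asdict(meta)
    return catalogue

catalogue = {}
register(catalogue, DatasetMetadata(name="trades_eod", owner="markets-data", tags=["equities"]))
print(sorted(catalogue))
```

A production SDK would publish such records to the catalogue's API and could enrich the description fields with LLM-generated summaries.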

Posted 10 hours ago

Apply

12.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Hi All,

We are conducting face-to-face interviews on 23rd August at LTIMindtree, Kolkata. Candidates with the skill sets below can apply (share your updated CV at chandrani.gupta@ltimindtree.com).

ETL Tester with SQL is a mandatory skill, along with automation in Java Selenium or Python:
- Selenium Java
- Selenium Python with API automation
- AWS cloud data testing, cloud testing
- Data automation, ETL / data warehouse testing
- Python data automation testing
- Selenium-Java testing
- SQL & database testing

Location: Kolkata
Years of experience: 4+ to 12 years
Notice period: immediate to 30 days

Interested candidates, share your CV at chandrani.gupta@ltimindtree.com.
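A common ETL test of the kind this role covers is source/target reconciliation. A minimal sketch using an in-memory SQLite database as a stand-in for the warehouse (table names are illustrative):

```python
import sqlite3

def rowcount_matches(conn, source_table, target_table):
    # Basic ETL reconciliation check: source and target row counts agree
    # after a load. Real suites add checksum and column-level comparisons.
    cur = conn.cursor()
    src = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    return src == tgt

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER)")
conn.execute("CREATE TABLE tgt (id INTEGER)")
conn.executemany("INSERT INTO src VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO tgt VALUES (?)", [(1,), (2,), (3,)])
print(rowcount_matches(conn, "src", "tgt"))
```

In practice the same assertion would run against the real source and target connections inside a pytest or Selenium-driven regression suite.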

Posted 10 hours ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Why Join 7-Eleven Global Solution Center?
When you join us, you'll embrace ownership as teams within specific product areas take responsibility for end-to-end solution delivery, supporting local teams and integrating new digital assets. Challenge yourself by contributing to products deployed across our extensive network of convenience stores, processing over a billion transactions annually. Build solutions for scale, addressing the diverse needs of our 84,000+ stores in 19 countries. Experience growth through cross-functional learning, encouraged and applauded at 7-Eleven GSC. With our size, stability, and resources, you can navigate a rewarding career. Embody leadership and service as 7-Eleven GSC remains dedicated to meeting the needs of customers and communities.

Why We Exist, Our Purpose and Our Transformation
7-Eleven is dedicated to being a customer-centric, digitally empowered organization that seamlessly integrates our physical stores with digital offerings. Our goal is to redefine convenience by consistently providing top-notch customer experiences and solutions in a rapidly evolving consumer landscape. Anticipating customer preferences, we create and implement platforms that empower customers to shop, pay, and access products and services according to their preferences. To achieve success, we are driving a cultural shift anchored in leadership principles, supported by the realignment of organizational resources and processes.

At 7-Eleven we are guided by our Leadership Principles. Each principle has a defined set of behaviours which help guide the 7-Eleven GSC team to Serve Customers and Support Stores.
- Be Customer Obsessed
- Be Courageous with Your Point of View
- Challenge the Status Quo
- Act Like an Entrepreneur
- Have an "It Can Be Done" Attitude
- Do the Right Thing
- Be Accountable

About This Opportunity
We are building a focused, high-impact analytics pod within 7-Eleven GSC (GBS) to support our restaurant and food service business.
This dynamic team will drive insights on category and product performance, operational efficiencies, and customer experience across store levels. Working closely with global analytics leadership (AI included), the pod will deliver foundational reporting, advance dashboards, and develop future-ready AI/ML capabilities for demand forecasting and strategic decision-making. Job Title: Manager - Analytics Location: Bangalore Responsibilities: Design & Develop Analytics Solutions: Lead and support the creation of dashboards and BI reports reflecting restaurant and category business KPIs, programs, and operational variables. Data Analysis & Interpretation: Analyse large, complex datasets such as POS, supply chain, loyalty, customer feedback, and digital engagement to generate actionable insights. Category Management Analytics: Evaluate product and program effectiveness across multiple stores, uncovering nuanced business insights and identifying growth opportunities. AI & Forecasting Alignment: Collaborate with AI teams to gradually integrate predictive models for demand planning, inventory optimization, and staffing forecasts. Report & Communication Excellence: Translate complex analytics into easily understandable, business-focused reports and presentations for technical and non-technical stakeholders. Data Quality & Best Practices: Advocate for clean data, scalable analytics frameworks, and adherence to industry best practices within the team. Cross-functional Collaboration: Partner with IT, Marketing, Operations, Product teams, and other Analytics groups to drive integrated solutions. Mentorship & Leadership: Provide technical guidance and lead the India POD to build analytics capabilities and infrastructure. Technical Skills Required: Data Engineering & Querying: SQL, ETL frameworks, data integration tools. Analytics Programming: Python and/or R for data analysis and modelling. BI & Visualization: Tableau, Power BI, or equivalent reporting platforms. 
Cloud Platforms: Azure, AWS, or Google Cloud environments. Advanced Analytics (Future-readiness): Familiarity with ML frameworks and AI techniques is desirable. Education & Experience: Bachelor’s or Master’s degree in Data Analytics, Computer Science, Statistics, Engineering, or related disciplines. 7+ years of analytics experience in the QSR, food service, retail, or convenience store sector. Strong proficiency in SQL and at least one analytics programming language (Python or R). Experience with BI/data visualization tools (Tableau, Power BI, or similar). Hands-on working knowledge of restaurant tech stacks including POS systems and related data pipelines. Analytical mindset with proven ability to handle ambiguous data and “read between the lines” in BI reporting. Strong communication skills to present findings clearly to diverse audiences. Familiarity with cloud data platforms (Azure, AWS, Google Cloud) is preferred. Exposure to predictive analytics, machine learning, and AI-driven business applications is a plus. 7-Eleven Global Solution Center is an Equal Opportunity Employer committed to diversity in the workplace. Our strategy focuses on three core pillars – workplace culture, diverse talent and how we show up in the communities we serve. As the recognized leader in convenience, the 7-Eleven family of brands embraces diversity, equity and inclusion (DE+I). It’s not only the right thing to do for customers, Franchisees and employees—it’s a business imperative. Privileges & Perquisites: 7-Eleven Global Solution Center offers a comprehensive benefits plan tailored to meet the needs and improve the overall experience of our employees, aiding in the management of both their professional and personal aspects. Work-Life Balance: Encouraging employees to unwind, recharge, and find balance, we offer flexible and hybrid work schedules along with diverse leave options. Supplementary allowances and compensatory days off are provided for specific work demands. 
Well-Being & Family Protection: Comprehensive medical coverage for spouses, children, and parents/in-laws, with voluntary top-up plans, OPD coverage, day care services, and access to health coaches. Additionally, an Employee Assistance Program with free, unbiased and confidential expert consultations for personal and professional issues. Wheels and Meals: Free transportation and cafeteria facilities with diverse menu options including breakfast, lunch, snacks, and beverages, customizable and health-conscious choices. Certification & Training Program: Sponsored training for specialized certifications. Investment in employee development through labs and learning platforms. Hassle-free Relocation: Support and reimbursement for newly hired employees relocating to Bangalore, India.
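The category and POS analysis described in the Manager - Analytics role above can be sketched in a few lines of Python; the record fields below are hypothetical stand-ins, not an actual 7-Eleven schema:

```python
from collections import defaultdict

def category_sales_summary(pos_records):
    """Aggregate hypothetical POS records into per-category units and revenue.

    Each record is a dict with 'category', 'units', and 'unit_price' keys
    (illustrative field names only).
    """
    summary = defaultdict(lambda: {"units": 0, "revenue": 0.0})
    for rec in pos_records:
        row = summary[rec["category"]]
        row["units"] += rec["units"]
        row["revenue"] += rec["units"] * rec["unit_price"]
    return dict(summary)

pos = [
    {"category": "hot_food", "units": 3, "unit_price": 120.0},
    {"category": "beverages", "units": 5, "unit_price": 40.0},
    {"category": "hot_food", "units": 2, "unit_price": 150.0},
]
summary = category_sales_summary(pos)
```

In practice this aggregation would live in SQL or a BI tool such as Tableau or Power BI; the sketch only illustrates the KPI logic.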

Posted 10 hours ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About the Role: We are hiring an Android Analyst with a unique focus on code quality, documentation, QA, refactoring, and code comprehension. This role is ideal for candidates who enjoy analyzing code, ensuring technical excellence and best practices in development, and supporting developer teams through structured reviews and process improvements. Responsibilities: Conduct in-depth reviews of codebases to ensure maintainability, scalability, and adherence to clean architecture principles. Provide critical feedback where best practices were not followed in the given codebase and suggest improvements. Find common bugs in the code, explain the errors, suggest fixes, and refactor the code. Review the code and write unit tests to improve code coverage for the given codebase. Follow the internal QA processes for all deliverables. Requirements: B.E./B.Tech/M.S./M.Tech in Computer Science, Engineering, or a related field. 2+ years of relevant industry experience in Android development. Strong logical and analytical skills. Experience in Kotlin and Java programming languages. Experience with Dart (Flutter), Python, and JavaScript. Hands-on experience with Android application development, including build systems like Gradle and Bazel, SDKs, and web development & debugging tools like Chrome DevTools and ADB. Solid understanding of architectural patterns (MVVM, Clean Architecture) and best practices in Android development and web development. Experience with code review, unit testing, documentation, and QA processes.
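Since the role lists Python alongside Kotlin and Java, here is a language-agnostic illustration (in Python) of the review-then-test workflow described above: spot a common bug class (comparing version strings lexicographically), refactor, and pin the fix with tests. The function names are invented for the example.

```python
def parse_version(version: str) -> tuple:
    """Parse a 'major.minor.patch' string into an int tuple.

    A common review finding: comparing raw strings makes '10.0.0' sort
    before '9.0.0', so versions are parsed to ints first.
    """
    parts = version.split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"invalid version: {version!r}")
    return tuple(int(p) for p in parts)

def is_newer(candidate: str, current: str) -> bool:
    """Return True when candidate is strictly newer than current."""
    return parse_version(candidate) > parse_version(current)
```

A unit test capturing the original bug (the string comparison `"10.0.0" > "9.0.0"` is False) is exactly the kind of regression check the responsibilities call for.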

Posted 10 hours ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Our client is a consumer goods company and is looking for a highly skilled and detail-oriented Supply Planning Specialist. The consultant will help drive supply planning processes, ensure optimal inventory levels, and collaborate with multiple stakeholders across functions to meet business goals. Key Responsibilities :- Develop and execute effective supply plans to meet customer demand while optimizing inventory levels. Work closely with production, procurement, logistics, and factory teams to ensure seamless coordination and on-time delivery. Use SAP to manage and monitor supply chain data, inventory movements, and production schedules. Analyze data using advanced Excel tools to identify trends, variances, and opportunities for efficiency improvements. Monitor and manage inventory levels to avoid excess and stockouts, ensuring healthy inventory turnover. Drive the S&OP process with cross-functional teams to align supply with demand forecasts. Coordinate with internal and external stakeholders to resolve supply constraints and bottlenecks. Generate periodic reports and dashboards to provide visibility into supply chain performance metrics. Please Note - This is a Hybrid role based out of Mumbai Qualifications: ● Bachelor’s or Master’s degree in Operations Management, Industrial Engineering, Business Analytics, Supply Chain, or a related field. ● 3+ years of experience in capacity planning, operations analysis, workforce planning, or related areas. ● Strong analytical and modeling skills with advanced Excel and experience using tools like SQL, Python, R, or similar. ● Experience with planning and forecasting platforms (e.g., Anaplan, SAP IBP, Oracle, Capacity Planning Tools). ● Solid understanding of operational and resource planning concepts (e.g., headcount, utilization, throughput, lead times). ● Excellent communication and stakeholder management skills. ● Ability to think strategically, anticipate business needs, and influence decision-making.
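The "avoid excess and stockouts" responsibility above often reduces to a reorder-point calculation; a minimal sketch with illustrative numbers, assuming normally distributed demand and a 95% service level:

```python
import math

def reorder_point(avg_daily_demand, lead_time_days, demand_std_dev, service_z=1.65):
    """Classic reorder point: expected lead-time demand plus safety stock.

    safety_stock = z * sigma_daily * sqrt(lead_time); z = 1.65 approximates
    a 95% cycle service level. All inputs here are illustrative.
    """
    safety_stock = service_z * demand_std_dev * math.sqrt(lead_time_days)
    return avg_daily_demand * lead_time_days + safety_stock

# 100 units/day average demand, 4-day lead time, daily std dev of 20 units.
rop = reorder_point(avg_daily_demand=100, lead_time_days=4, demand_std_dev=20)
```

Real S&OP tooling (SAP IBP, Anaplan) embeds this logic in its planning engines; the sketch only shows the underlying arithmetic.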

Posted 10 hours ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Sholinganallur, Chennai, Tamil Nadu

On-site

Role Overview: We’re seeking a Full Stack Developer Intern skilled in React (frontend), Django (backend), and PostgreSQL (database) to join our engineering team. You’ll work on building scalable, secure, and user-friendly web applications for our healthcare platform, contributing to both new feature development and optimization of existing systems. Key Responsibilities: Develop and maintain responsive web interfaces using React. Build and enhance backend services and APIs using Django. Design, query, and optimize PostgreSQL databases. Integrate frontend and backend components for smooth user experiences. Debug, troubleshoot, and resolve application issues. Collaborate with designers, product managers, and other developers to deliver features. Ensure clean, maintainable, and well-documented code. Required Skills: Strong understanding of React.js, React hooks, and component-based architecture. Proficiency in Python and Django for backend development. Hands-on experience with PostgreSQL (schemas, queries, optimization). Familiarity with RESTful API design and integration. Knowledge of HTML, CSS, Tailwind and responsive web design principles. Version control with Git and GitHub. Preferred Skills: Basic understanding of cloud deployment (AWS, Azure, or GCP). Experience with authentication (JWT, OAuth2). Knowledge of Django REST Framework (DRF). Interest in AI/ML integration in web applications. Education: Pursuing or recently completed a degree in Computer Science, IT, or related field. Perks: Hands-on work with cutting-edge AI healthcare technology. Flexible and collaborative startup culture. Mentorship from experienced engineers. Opportunity to work on real-world telemedicine applications. How to Apply: Send your resume, portfolio/GitHub, and a brief introduction to [careers@aevevotechnology.com] with the subject line: “Full Stack Developer Intern – React/Django”. 
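As a hedged illustration of the database skills this posting asks for (schemas, queries, optimization), the sketch below uses Python's built-in sqlite3 as a stand-in so it runs anywhere; the table and column names are invented, and real PostgreSQL would be accessed through a driver such as psycopg2 with slightly different syntax:

```python
import sqlite3

# SQLite stands in for PostgreSQL so the sketch is self-contained; the
# schema-plus-index idea carries over directly.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE appointments (
        id INTEGER PRIMARY KEY,
        patient_name TEXT NOT NULL,
        scheduled_on TEXT NOT NULL
    )
""")
# An index on the filter column keeps date-range lookups from scanning the table.
conn.execute("CREATE INDEX idx_appointments_date ON appointments (scheduled_on)")
conn.executemany(
    "INSERT INTO appointments (patient_name, scheduled_on) VALUES (?, ?)",
    [("Asha", "2025-01-10"), ("Ravi", "2025-01-10"), ("Meena", "2025-01-11")],
)
rows = conn.execute(
    "SELECT patient_name FROM appointments WHERE scheduled_on = ? ORDER BY patient_name",
    ("2025-01-10",),
).fetchall()
```

Parameterized queries (the `?` placeholders) are the habit worth forming early; they prevent SQL injection in both SQLite and PostgreSQL.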
Job Type: Internship Contract length: 6 months Pay: ₹5,000.00 - ₹10,000.00 per month Ability to commute/relocate: Sholinganallur, Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): How do you rate yourself in React (0-10)? How do you rate yourself in Django (0-10)? How do you rate yourself in PostgreSQL (0-10)? How do you rate yourself in Tailwind (0-10)? Have you completed a Python full-stack development course at an institute? Education: Bachelor's (Preferred) Experience: React: 1 year (Preferred) Django: 1 year (Preferred) Tailwind: 1 year (Preferred) Work Location: In person

Posted 10 hours ago

Apply

2.0 - 3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Relocation Assistance Offered Within Country Job Number #168511 - Mumbai, Maharashtra, India Who We Are Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specialising in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all. Brief introduction - Role Summary/Purpose: The candidate will support the Colgate Business Teams in Marketing and Insights functions across the globe. The role requires an understanding of internal and external data (syndicated market data, point of sales, etc.) and the ability to provide insights-based services and solutions. The person should be able to build insights from large external and internal datasets: an analytical problem solver with a focus on Business Intelligence and Insights, able to work in a collaborative, customer-focused way (proactive and responsive to business needs). 
Excellent written and verbal communication skills Responsibilities: Build Insights and Competition Intelligence solutions Work on connected data solutions, building automated insights and reports Work on different datasets & systems (Marketing, Customers, Product masters, Finance, Digital, Point of Sales) and link the business rationales to develop & support Insights and Analytics Build & support standard business evaluation trackers & dashboards per agreed SLAs and respond to ad hoc requests for reporting and first-level analysis Data quality and sanity are essential, so validating the data, trackers, and dashboards is a priority Communicate and coordinate with Divisions and subsidiaries as part of investigation and resolution of discrepancies You will engage with business teams in Corporate, Divisions, Hub (cluster of countries) and countries to understand business requirements and collaborate on solutions Work with internal Analytics teams & Information Technology teams to learn and advance on developing sustainable and standard reporting trackers Partner with external data vendors to ensure timely data availability with appropriate data sanity In a constantly evolving business environment, you will find different ways to tackle business problems through analytical solutions (data transformation, data visualization, data insights) Required Qualifications: Graduate in Engineering/Sciences/Statistics, MBA Experience with third-party data, i.e. syndicated market data (Nielsen, Kantar, IRI), Point of Sales, etc. 
Minimum 2-3 years of experience working in a Data Insights / Analytics role Should have worked in a client-facing / stakeholder management role to understand business needs and draw hypotheses Working knowledge of the consumer packaged goods industry Knowledge of data transformation tools - R/Python, Snowflake Working knowledge of visualization tools like Tableau, DOMO, Looker Studio, Sigma Ability to read, analyze, and visualize data Effective verbal & written communication for business engagement Excellent presentation/visualization skills Preferred Qualifications: Created/worked on automated Insights solutions Worked on Competition Intelligence solutions Understanding of Colgate’s processes and tools supporting analytics (for internal candidates) Willingness and ability to experiment with new tools and techniques Good facilitation and project management skills Our Commitment to Inclusion Our journey begins with our people—developing strong talent with diverse backgrounds and perspectives to best serve our consumers around the world and fostering an inclusive environment where everyone feels a true sense of belonging. We are dedicated to ensuring that each individual can be their authentic self, is treated with respect, and is empowered by leadership to contribute meaningfully to our business. Equal Opportunity Employer Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, colour, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
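One concrete example of the "Build Insights" work with syndicated market data is computing brand market share; a minimal sketch with made-up brand names and sales values:

```python
def market_share(sales_by_brand):
    """Percentage market share from hypothetical syndicated sales values.

    In practice the inputs would come from Nielsen/Kantar/IRI extracts after
    cleaning; here they are hard-coded for illustration.
    """
    total = sum(sales_by_brand.values())
    if total == 0:
        return {brand: 0.0 for brand in sales_by_brand}
    return {brand: round(100 * value / total, 1)
            for brand, value in sales_by_brand.items()}

shares = market_share({"BrandA": 450.0, "BrandB": 350.0, "BrandC": 200.0})
```

The same ratio would typically be built as a calculated field in Tableau or DOMO; the point is only the share logic and the zero-total guard.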

Posted 10 hours ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: Technical Project Manager Experience : 6+ Years Location: Ahmedabad We are looking for an experienced Project Manager with a strong background in software development to oversee and drive the success of web and mobile projects. The ideal candidate will ensure smooth project execution, effective client communication, and timely delivery. Key Responsibilities: Lead cross-functional teams on technical projects from initiation to completion Monitor project schedules, budgets, resources, and expenditures Ensure timely delivery within scope and quality standards Facilitate communication between departments to maintain alignment Coordinate client meetings, document decisions, and manage expectations Identify risks, handle issues, and implement project changes as needed Track progress, ensure client satisfaction, and maintain project documentation Explore opportunities to enhance efficiency and profitability Requirements: Background in software development (coding experience preferred in Python/PHP) Strong communication and client management skills Experience in web/mobile project delivery Proficient in Microsoft Office (Word, Excel, Outlook) Excellent organizational, multitasking, and time-management skills

Posted 10 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Title: Senior Python and Azure Engineer Location: Remote & Hyderabad, INDIA Job Type: Contract; immediate joiners Job Description: We are looking for a Senior Python Engineer to develop and optimize scalable backend systems, APIs, and automation workflows. You will work on building high-performance applications, integrating third-party APIs, and designing efficient, cloud-native architectures. Responsibilities: Design, develop, and maintain scalable backend services and APIs using Python and frameworks like FastAPI and Flask. Optimize database performance using SQL/NoSQL databases (PostgreSQL, MySQL, MongoDB, Redis). Implement asynchronous programming for high-throughput applications using message brokers such as Kafka and RabbitMQ. Work with microservices architecture, containerization (Docker), and orchestration (Kubernetes). Implement CI/CD pipelines for continuous deployment and testing. Collaborate with data engineers to process and optimize large-scale datasets. Develop and integrate machine learning pipelines where necessary. Write efficient, well-documented, and maintainable code following best practices. Conduct code reviews and mentor junior developers. Ensure high system security, performance, and scalability. Requirements: 10+ years of experience in software development and 5+ years in Python Strong understanding of OOP, design patterns, and SOLID principles. Experience in building RESTful and GraphQL APIs. Expertise in SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, Elasticsearch). Knowledge of multi-threading, concurrency, and asynchronous programming. Hands-on experience with cloud platforms (AWS, GCP, Azure) and serverless computing. Experience working with DevOps tools, CI/CD pipelines, and containerization. Strong debugging and performance optimization skills. Proficiency in unit testing and test-driven development (TDD).
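The asynchronous, high-throughput pattern in the responsibilities above can be illustrated with stdlib asyncio alone (no FastAPI or Kafka needed for the sketch); `fetch_record` is a hypothetical stand-in for any I/O-bound call such as a database or API request:

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    """Stand-in for an I/O-bound call; the sleep simulates network latency."""
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def fetch_all(ids):
    # gather runs the coroutines concurrently, so total wall time is roughly
    # one round-trip instead of len(ids) sequential round-trips.
    return await asyncio.gather(*(fetch_record(i) for i in ids))

results = asyncio.run(fetch_all(range(5)))
```

In a FastAPI service the same `async def` style applies directly to path handlers, which is why the framework pairs naturally with this requirement.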

Posted 10 hours ago

Apply

4.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Our client is a professionally-run private investment office managing a diversified portfolio across listed equities and startup investments. We combine rigorous fundamental analysis with a long-term perspective, seeking to generate sustainable value for our stakeholders. Job Description: We are seeking a skilled Equity Research Analyst to join the client team in Jaipur. You will be responsible for conducting thorough financial analysis and providing investment recommendations across both listed equities and startup/VC investments. The role requires strong analytical skills, sector knowledge, and the ability to deliver actionable insights that shape our portfolio strategies. Key Responsibilities: ∙ Conduct in-depth fundamental analysis of companies, industries, and market trends. ∙ Evaluate opportunities in both listed equities and the startup/VC space. ∙ Generate and present well-researched investment ideas to the team. ∙ Build and maintain detailed financial models to forecast performance and assess valuations. ∙ Stay abreast of market developments, regulatory changes, and economic trends. ∙ Collaborate with the investment team to shape and execute strategies. Qualifications: ∙ 3–4 years of experience in equity research, startup/VC analysis, or investment analysis. ∙ Proven ability to conduct independent research and deliver actionable insights. ∙ Strong quantitative and qualitative analytical skills. ∙ Excellent written and verbal communication. ∙ Proficiency in financial modeling and valuation techniques. ∙ Familiarity with financial databases and software (Bloomberg, FactSet, Excel, etc.). ∙ Knowledge of coding (Python or similar) is an added advantage. Additional Information: This role offers the opportunity to join a dynamic, close-knit team where your work will directly influence portfolio decisions across public and private markets. We offer competitive compensation, benefits, and opportunities for professional development. 
If you're motivated and passionate about investment research in both public and private markets, we’d love to hear from you.
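The financial-modeling and valuation skills listed above usually start with a discounted cash flow; a minimal Python sketch with purely illustrative cash flows, not a real model or a recommendation:

```python
def discounted_cash_flow(cash_flows, discount_rate, terminal_value=0.0):
    """Present value of projected cash flows plus an optional terminal value.

    cash_flows[t] is assumed to be received at the end of year t+1; all
    numbers below are made up for illustration.
    """
    pv = sum(cf / (1 + discount_rate) ** (t + 1)
             for t, cf in enumerate(cash_flows))
    pv += terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv

# Cash flows growing 10% per year, discounted at 10%: each year contributes
# the same present value.
value = discounted_cash_flow([100.0, 110.0, 121.0], discount_rate=0.10)
```

In practice this lives in an Excel model with sensitivity tables; the code only shows the time-value arithmetic an analyst is expected to know cold.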

Posted 10 hours ago

Apply

3.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Company Description Nykaa is a digitally native, consumer-tech company that offers a wide range of beauty, personal care and fashion products. Since its inception in 2012, Nykaa has disrupted the beauty retail market in India and captured the hearts of millions of customers. Besides offering engaging and educational content, we have diversified our offerings through other online platforms like Nykaa Fashion, Nykaa Man, and Superstore. Role: This is a full-time on-site role for a Machine Learning Engineer/Scientist in Bengaluru. As a Machine Learning Engineer/Scientist, you will design and deploy machine learning models to solve complex business problems. You will be responsible for developing and implementing statistical and machine learning algorithms, managing large datasets, and working collaboratively with cross-functional teams. You will work on interesting problem areas such as Personalization, Customer Growth, Demand Forecasting & Inventory Management and other DS problems. Key Skills Minimum 3 years' experience Strong background in statistics, machine learning and deep learning Expertise in pattern recognition, neural networks, and ML algorithms Proficiency in statistical tools and the Python programming language along with ML libraries (Scikit-Learn, XGBoost, LightGBM, etc.) Exposure to DL frameworks such as Keras, TensorFlow and PyTorch Have a sound understanding of modeling pipelines, ML architecture and MLOps Excellent problem-solving skills and attention to detail Ability to work collaboratively in a fast-paced environment Experience with Causal Inference and Experimentation would be an advantage Consumer tech experience would be a plus Experience on Search, Ranking, Relevance, Recommendations is highly preferred
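Before reaching for XGBoost or deep learning, a demand-forecasting workflow like the one described usually starts with a simple baseline; here is a pure-Python ordinary-least-squares trend fit on hypothetical weekly demand:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, the simplest baseline model."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical weekly demand for one SKU; a trend baseline to beat before
# trying gradient-boosted or deep models.
weeks = [1, 2, 3, 4]
demand = [10.0, 12.0, 14.0, 16.0]
slope, intercept = fit_linear(weeks, demand)
forecast_week5 = slope * 5 + intercept
```

The production version would use scikit-learn or statsmodels with seasonality and exogenous features, but a baseline like this is what new models are benchmarked against.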

Posted 10 hours ago

Apply

0.0 - 2.0 years

5 - 12 Lacs

Navi Mumbai, Maharashtra

On-site

Company name: PibyThree Consulting Services Pvt Ltd. Location: Navi Mumbai Start date: ASAP Job Description: We are seeking an experienced Data Engineer to join our team. The ideal candidate will have hands-on experience with Azure Data Factory (ADF), Snowflake, and data warehousing concepts. The Data Engineer will be responsible for designing, developing, and maintaining large-scale data pipelines and architectures. Key Responsibilities: Design, develop, and deploy data pipelines using Azure Data Factory (ADF) Work with Snowflake to design and implement data warehousing solutions Collaborate with cross-functional teams to identify and prioritize data requirements Develop and maintain data architectures, data models, and data governance policies Ensure data quality, security, and compliance with regulatory requirements Optimize data pipelines for performance, scalability, and reliability Troubleshoot data pipeline issues and implement fixes Stay up-to-date with industry trends and emerging technologies in data engineering Requirements: 4+ years of experience in data engineering, with a focus on cloud-based data platforms (Azure preferred) 2+ years of hands-on experience with Azure Data Factory (ADF) 1+ year of experience working with Snowflake Strong understanding of data warehousing concepts, data modeling, and data governance Experience with data pipeline orchestration tools such as Apache Airflow or Azure Databricks Proficiency in programming languages such as Python, Java, or C# Experience with cloud-based data storage solutions such as Azure Blob Storage or Amazon S3 Strong problem-solving skills and attention to detail Job Type: Full-time Pay: ₹500,000.00 - ₹1,200,000.00 per year Ability to commute/relocate: Navi Mumbai, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): Current CTC Expected CTC Notice Period Education: Bachelor's (Preferred) Experience: total work: 4 years (Preferred) PySpark: 2 years (Required) Azure Data Factory: 2 years (Required) Databricks: 2 years (Required) Work Location: In person
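A typical transform inside the kind of pipeline described above, cleansing and deduplicating rows before load, can be sketched as follows; the field names are hypothetical, and in ADF/Snowflake this logic would more likely live in a mapping data flow or a staged SQL MERGE than in Python:

```python
def cleanse_rows(rows):
    """Trim fields, drop incomplete rows, and dedupe on a business key.

    Keeping the function pure and idempotent means re-running the pipeline
    step on the same input cannot corrupt the target.
    """
    seen = set()
    out = []
    for row in rows:
        key = row.get("order_id")
        name = (row.get("customer") or "").strip()
        if not key or not name or key in seen:
            continue  # skip incomplete rows and duplicate keys
        seen.add(key)
        out.append({"order_id": key, "customer": name})
    return out

clean = cleanse_rows([
    {"order_id": "A1", "customer": " Priya "},
    {"order_id": "A1", "customer": "Priya"},   # duplicate business key
    {"order_id": None, "customer": "Rahul"},   # missing key, dropped
])
```

Idempotent transforms like this are what make pipeline retries in ADF safe to configure.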

Posted 10 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the client: A world leader in fire & life safety solutions, serving everything from complex commercial facilities to homes. Through iconic, industry-defining brands including Kidde, Kidde Commercial, Edwards, GST, Badger, Gloria and Aritech, the client provides residential and commercial customers with advanced solutions and services to protect people and property in a wide range of applications, all around the globe. The company offers a wide range of products and services, including HVAC, refrigeration, and fire & security solutions, and has a global presence with a diverse workforce and a focus on innovation and sustainability. Job Title: Senior Cybersecurity Engineer · Mode of Interview: Virtual · Location: Hyderabad · Experience: 10+ years · Mode of Work: Work from office · Job Type: Contract to hire · Notice Period: Immediate joiners · Project Tenure: Long-term project Job Description: Role Responsibilities: Design and implement enterprise-grade security solutions across cloud and on-prem environments. Lead incident response, threat modeling, risk assessments, and vulnerability management initiatives. Monitor, detect, and respond to security incidents using SIEM, EDR, and other tools. Develop and enforce security policies, standards, and best practices. Collaborate with DevOps, IT, and software engineering teams to integrate security into the SDLC. Conduct security audits, penetration tests, and red/blue team exercises. Stay current with emerging threats, vulnerabilities, and regulatory requirements (e.g., NIST, ISO 27001, GDPR, HIPAA). Role Purpose: We are seeking a highly skilled and experienced Senior Cybersecurity Engineer to join our growing security team. In this role, you will be responsible for designing, implementing, and maintaining advanced security solutions to protect our infrastructure, applications, and data. 
You will play a key role in threat detection, incident response, and security architecture, ensuring our systems remain resilient against evolving cyber threats. Minimum Requirements: Bachelor’s or Master’s degree in Computer Science, Information Security, or related field. 10 years of experience in cybersecurity engineering or related roles. Strong knowledge of network security, cloud security (AWS, Azure, or GCP), and endpoint protection. Proficiency with tools such as Splunk, CrowdStrike, Palo Alto, Nessus, Wireshark, etc. Experience with scripting and automation (Python, Bash, PowerShell). Familiarity with security frameworks and compliance standards (e.g., CIS, NIST, SOC 2). Excellent problem-solving, communication, and analytical skills. Industry certifications such as CISSP, OSCP, CEH, CISM, or AWS Security Specialty. Experience with Zero Trust Architecture and Identity & Access Management (IAM). Background in incident response, digital forensics, or threat intelligence. Additional Job Description Summary Cyber expert, recognized as a thought leader in Cybersecurity. Distributes directives, vulnerability, and threat advisories to identified consumers. Job Description Leads, designs and develops new systems, applications, and solutions for cybersecurity platforms Leads the integration of new cyber architectural features into existing infrastructures. Leads architectural analysis of cybersecurity solutions and relates existing systems to future needs and trends. Recommends incident response procedures and researches potential network vulnerabilities. Assesses and resolves user access queries related to security controls. Leads identity access management initiatives internally. Supervises internal and external cyber audits. May interact with external parties as it relates to cyber regulations.
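The "scripting and automation (Python, Bash, PowerShell)" requirement above often takes the form of small detection utilities; here is a sketch that flags possible SSH brute-force sources from simplified sshd-style log lines (the log format and threshold are illustrative, and a production SIEM would do this with correlation rules):

```python
import re
from collections import Counter

# Simplified sshd failure pattern; real deployments have more variants.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def brute_force_candidates(log_lines, threshold=3):
    """Return source IPs with at least `threshold` failed SSH logins."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(2)] += 1  # group(2) is the source address
    return {ip for ip, n in hits.items() if n >= threshold}

logs = [
    "sshd[1]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "sshd[2]: Failed password for invalid user admin from 10.0.0.5 port 22 ssh2",
    "sshd[3]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "sshd[4]: Accepted password for ops from 10.0.0.9 port 22 ssh2",
]
suspects = brute_force_candidates(logs)
```

A script like this would typically feed a blocklist or raise a SOAR alert rather than act on its own.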

Posted 10 hours ago

Apply

0 years

0 Lacs

India

On-site

About the Company ZeTheta Algorithms Private Limited is a FinTech start-up in India which has been recently set up and is developing innovative AI tools. https://www.instagram.com/zetheta.official About the Role We are seeking a talented and motivated student intern for a Fixed Income Portfolio Manager role. This is an extraordinary opportunity for a self-driven, financially skilled student with an eye for banking. Responsibilities Practical assignments associated with fixed income investment and analysis, with simulations in: Fixed Income Analysis & Valuation: Calculate Yield to Maturity (YTM) and assess returns on different types of fixed-income securities. Determine Present Value (PV) of securities and assess market pricing strategies. Compare investment options such as corporate bonds, fixed deposits, and mutual funds. Quantitative & AI-based Financial Modelling: Develop financial models in Excel, Python, or R to assess risk and return metrics. Implement AI-driven approaches for analyzing credit risk and probability of default. Work on Value at Risk (VaR) simulations and machine learning models for risk assessment. Debt Market & Credit Research: Analyze corporate bond spreads, relative valuations, and structured finance instruments. Conduct data cleaning and visualization for sovereign credit research and CDS time series data. Assist in the structuring and evaluation of project finance and asset-backed securities. Technology & Automation in Finance: Understand Microsoft Excel AI tools for financial modelling. Develop and test AI models for credit derivatives and portfolio risk assessment. Work on FinTech tools like Virtual Risk Analyser and Virtual Portfolio Analyser. Qualifications A student from any academic discipline. Internship Details • Duration: Self-paced, with options of 1, 2, 3 or 4 months • Type: Unpaid
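The YTM and PV assignments above can be made concrete with a short sketch: price a plain-vanilla annual-coupon bond and recover its yield by bisection. The numbers are illustrative; real securities need day counts, accrued interest, and settlement conventions on top of this.

```python
def bond_price(face, coupon_rate, ytm, years, freq=1):
    """Present value of a plain-vanilla bond's coupons and principal."""
    c = face * coupon_rate / freq          # coupon per period
    n = years * freq                       # number of periods
    r = ytm / freq                         # yield per period
    return sum(c / (1 + r) ** t for t in range(1, n + 1)) + face / (1 + r) ** n

def yield_to_maturity(price, face, coupon_rate, years, freq=1):
    """Solve for YTM by bisection; price is monotone decreasing in yield."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, mid, years, freq) > price:
            lo = mid   # model price too high: yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity check: a bond priced at par has YTM equal to its coupon rate.
ytm = yield_to_maturity(price=1000.0, face=1000.0, coupon_rate=0.08, years=5)
```

The same structure extends to semi-annual coupons by passing `freq=2`.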

Posted 10 hours ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Description Want to join the Earth’s most customer centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge Status Quo? Do you strive to excel at goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced dynamic environment. Are you somebody that likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms, to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions with multiple sites. We are looking for an entrepreneurial and analytical program manager, who is passionate about their work, understands how to manage service levels across multiple skills/programs, and who is willing to move fast and experiment often. Key job responsibilities Design and develop highly available dashboards and metrics using SQL and Excel/Tableau Execute high priority (i.e. 
cross functional, high impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area Creates and maintains comprehensive business documentation including user stories, acceptance criteria, and process flows that help the BIE understand the context for developing ETL processes and visualization solutions. Performs user acceptance testing and business validation of delivered dashboards and reports, ensuring that BIE-created solutions meet actual operational needs and can be effectively utilized by site managers and operations teams. Monitors business performance metrics and operational KPIs to proactively identify emerging analytical requirements, working with BIEs to rapidly develop solutions that address real-time operational challenges in the dynamic AI-enhanced fulfillment environment. About The Team The Global Operations – Artificial Intelligence (GO-AI) team remotely handles exceptions in the Amazon Robotic Fulfillment Centers Globally. GO-AI seeks to complement automated vision based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It is operating multiple programs including Nike IDS, Proteus, Sparrow and other new initiatives in partnership with global technology and operations teams. 
Basic Qualifications
• Experience defining requirements and using data and metrics to draw business insights
• Knowledge of SQL
• Knowledge of data visualization tools such as QuickSight, Tableau, Power BI, or other BI packages
• Knowledge of Python, VBA, macros, and Selenium scripts
• 1+ years of experience working in an Analytics / Business Intelligence environment, with prior experience designing and executing analytical projects

Preferred Qualifications
• Experience using AI tools
• Experience with Amazon Redshift and other AWS technologies for large datasets
• Analytical mindset, with the ability to see the big picture and influence others
• Detail-oriented, with an aptitude for solving unstructured problems; the role requires the ability to extract data from various sources and to design, construct, and execute complex analyses that produce data/reports which help solve the business problem
• Good oral, written, and presentation skills, combined with the ability to take part in group discussions and explain complex solutions
• Ability to apply analytical, computer, statistical, and quantitative problem-solving skills
• Ability to work effectively in a multi-task, high-volume environment
• Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A3027310
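The responsibilities above center on turning raw operational event data into KPIs for dashboards. As a minimal, hedged sketch of that kind of metric computation (the record fields, site codes, and the "resolution rate" metric here are hypothetical illustrations, not taken from the posting):

```python
from collections import defaultdict

def site_resolution_rates(records):
    """Compute per-site exception resolution rates from raw event records.

    Each record is a dict with hypothetical fields:
    'site' (a site code) and 'status' ('resolved' or 'escalated').
    Returns {site: fraction_of_exceptions_resolved}.
    """
    totals = defaultdict(int)
    resolved = defaultdict(int)
    for rec in records:
        totals[rec["site"]] += 1
        if rec["status"] == "resolved":
            resolved[rec["site"]] += 1
    return {site: resolved[site] / totals[site] for site in totals}

# Illustrative events; in practice these would come from a SQL query.
events = [
    {"site": "HYD1", "status": "resolved"},
    {"site": "HYD1", "status": "escalated"},
    {"site": "DEL2", "status": "resolved"},
]
rates = site_resolution_rates(events)
```

A function like this would typically sit behind a dashboard refresh job, with the aggregation pushed down into SQL once data volumes grow.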

Posted 10 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Basic Scope of Job
As a Cloud & Server Engineer, you will be responsible for the administration, support, and optimization of Azure cloud, on-prem server, and Kubernetes cluster environments. You will take ownership of incidents, execute infrastructure changes, and contribute to the design, implementation, and maintenance of core infrastructure services, including cloud networking, storage, and backup solutions. You will also drive improvements in system performance, security, and cost optimization across both cloud and on-prem platforms.

Duties & Responsibilities

Cloud Management & Support
• Manage and support on-prem and Azure server infrastructure (VMs, OS, backups, storage, networking) and the Kubernetes cluster with Rancher.
• Monitor cost and implement Azure governance practices (e.g., tagging, reserved instances).
• Maintain cloud security posture (e.g., pfSense, firewalls, identity/access).
• Automate operational tasks using scripting tools (PowerShell, Azure CLI, Logic Apps, Ansible).
• Perform patch management on Linux systems and ensure security compliance across environments.
• Monitor the system using tools such as Grafana, Checkmk, and Huawei DigitalView.
• Contribute to monthly/quarterly health reports and environment reviews.

Cloud Infra & Kubernetes Cluster Administration

1. Management & Maintenance
• Provision VMs; install and configure development, staging, and production environments.
• Keep the virtual environment up to date and healthy with routine maintenance and housekeeping activities, and coordinate with vendors to solve any infrastructure-related issues.
• Set up virtual machines based on the demands of various workloads, including assigning virtual CPUs, memory, and storage.
• Establish virtual networks, VLANs, and subnets to ensure that VMs and applications can communicate securely and efficiently.
• Manage virtual storage resources, ensuring high availability (HA), redundancy, and optimization based on different storage tiers (SSD, HDD).
• Support and manage M365.
• Manage user roles, privileges, and multi-factor authentication to ensure that only authorized personnel can make changes or access critical resources.
• Create and configure Kubernetes clusters.
• Set up authentication, authorization, and cluster monitoring and logging.
• Monitor cluster health and performance using Prometheus or Grafana.
• Set up centralized logging.
• Support alert configuration (Prometheus).
• Handle cluster upgrades and patching: upgrade Kubernetes versions and components, and apply security patches to Kubernetes and container runtimes.
• Support scaling and resource management: set resource requests and limits for containers, and manage node and pod failure handling (rescheduling).
• Test disaster recovery and backups.
• Manage Secrets and sensitive data.
• Implement network policies for communication control.
• Support application deployments.
• Set up load balancers and Services.

2. Performance Tuning
• Regularly monitor CPU, memory, storage, and network utilization to prevent bottlenecks or resource exhaustion.
• Run diagnostic tools to ensure system health and to preemptively address potential issues in hardware or software.

3. Backup and Recovery
• Design and implement regular backup strategies based on best practices.
• Backup job setup: configure backup jobs to define the schedule, retention policy, and target repository for backup data.
• Backup scheduling: set up daily, weekly, or on-demand backups depending on business needs.
• Backup integrity check: regularly verify that backups are successful and free from errors by running backup verification jobs.
• SureBackup: test backups in an isolated environment to ensure that they are recoverable and operational.
• Restore testing: periodically restore files or entire virtual machines (VMs) to validate that the restore process works smoothly and quickly.
• Replication jobs: configure replication of VMs to another site for disaster recovery (DR) purposes.
• Failover and failback: test and perform failover to replicated environments in the event of a disaster, and failback to the primary site once the issue is resolved.

4. Security and Access Control
• Manage user roles and privileges using least-privilege principles.
• Perform security hardening and ensure compliance.
• Implement SIEM on the system.

5. Replication and High Availability
• Configure the infrastructure's native HA features for automatic failover of VMs, and use replication or DR tools to ensure business continuity in case of a site failure.

6. Monitoring and Alerting
• Use tools like Grafana and Checkmk for health and performance.
• Monitor logs and alerts to detect anomalies or failures in the infrastructure.

7. Automation and Scripting
• Automate routine tasks using Ansible.
• Schedule recurring jobs with cron or orchestration tools like Airflow.

8. Documentation and Standards
• Maintain detailed documentation of cloud environments and procedures.
• Document incidents, solutions, and changes made during the troubleshooting process for accountability and future reference.

Analytics & Visualization
• Analyze complex datasets to uncover trends, patterns, and actionable insights.
• Translate stakeholder requirements into KPIs, reports, and dashboards.
• Design, build, and maintain dashboards.
• Manage reporting layers, including deployment, version control, and performance tuning.
• Collaborate with Product Owners, Engineers, and Data Scientists to align data strategy with business goals.

Stakeholder Collaboration & Leadership
• Act as a liaison between technical teams and business stakeholders.
• Partner with internal teams to gather and refine reporting requirements.
• Mentor junior analysts and support a culture of data literacy.
• Standardize reporting across departments and ensure alignment with company-wide metrics.

Education & Qualification
• 5+ years of experience in infrastructure, cloud, and Kubernetes administration roles.
• Strong experience with Unix OS and Azure IaaS components.
• Skilled in troubleshooting and resolving system and cloud performance issues.
• Experience with patching, backup (Veeam), DR (Azure Backup & ASR), and automation.
• Familiarity with scripting languages like Python or Bash and the automation tool Ansible.
• Familiarity with monitoring platforms (Grafana, Checkmk).
• Knowledge of ITIL processes and change management.
• Relevant certifications preferred (e.g., RHCSA, CKA, AZ-900, AZ-104, AZ-500, AZ-700, VMware Certified Technical Associate (VCTA), VMware Certified Professional (VCP), Microsoft 365 Certified Fundamentals (MS-900)).

Please share your CV with Z.Uddin@diyarme.com
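Much of the backup-and-recovery work described above comes down to applying a retention policy: deciding which dated backups to keep and which to prune. As a rough sketch only (the daily/weekly numbers are illustrative, and real tools such as Veeam implement their own retention logic), a simple daily-plus-weekly keep/prune decision could look like this:

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, daily=7, weekly=4, today=None):
    """Select which dated backups to retain under a simple
    daily + weekly retention policy (the numbers are illustrative).

    Keeps every backup from the last `daily` days, plus the most
    recent backup in each ISO week going back `weekly` weeks.
    """
    today = today or date.today()
    daily_cutoff = today - timedelta(days=daily)
    weekly_cutoff = today - timedelta(weeks=weekly)
    keep = set()
    newest_per_week = {}
    for d in sorted(backup_dates):
        if d >= daily_cutoff:
            keep.add(d)
        if d >= weekly_cutoff:
            # later dates overwrite earlier ones in the same ISO week
            newest_per_week[d.isocalendar()[:2]] = d
    keep.update(newest_per_week.values())
    return sorted(keep)

# Example: nightly backups for the past ~6 weeks
ref = date(2024, 6, 30)
dates = [ref - timedelta(days=i) for i in range(42)]
kept = backups_to_keep(dates, today=ref)
```

The same keep/prune output can then drive the actual deletion step, and periodic restore testing (as the posting requires) verifies that whatever survives pruning is actually recoverable.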

Posted 10 hours ago

Apply

5.0 years

4 - 15 Lacs

Kochi, Kerala, India

On-site

Job Title: Senior DevOps Engineer
Experience Required: 5 to 7 years
Notice Period: Immediate to 1 month
CTC: 15 LPA
Location: Ernakulam

Job Description
We are seeking a highly skilled and experienced Senior Cloud Solutions Architect with profound expertise in the AWS and Azure platforms. This role is pivotal in designing, implementing, and managing advanced cloud solutions to drive business innovation and efficiency. The ideal candidate will possess a robust technical background in cloud services, including but not limited to compute, storage, networking, security, and developer tools on both AWS and Azure.

Responsibilities
• Design and implement scalable, secure, and cost-efficient cloud solutions using AWS services such as EC2, S3, RDS, Lambda, and CloudFormation, and Azure services including VMs, Blob Storage, SQL Database, Functions, and ARM templates.
• Architect and deploy hybrid and multi-cloud solutions integrating AWS and Azure with on-premises environments, leveraging services like AWS Direct Connect, Azure ExpressRoute, and VPN gateways.
• Develop automation and orchestration strategies to streamline cloud deployments and operations using tools like AWS CloudFormation, Azure Resource Manager (ARM), Terraform, and Ansible.
• Ensure optimal cloud security posture by implementing and managing security and compliance tools such as AWS Identity and Access Management (IAM), Azure Active Directory (AD), AWS Key Management Service (KMS), Azure Key Vault, and AWS Shield.
• Optimize cloud resources and costs using tools and techniques like AWS Cost Explorer, Azure Cost Management + Billing, AWS Trusted Advisor, and Azure Advisor.
• Lead cloud migration projects, employing AWS Migration Services and Azure Migrate, to seamlessly move workloads from on-premises or other clouds to AWS and Azure.
• Stay current with the latest in cloud technology, applying best practices from the AWS Well-Architected Framework and the Azure Architecture Framework to design and implement solutions that meet business and technical requirements.
• Support the business development lifecycle (business development, capture, solution architecture, pricing, and proposal development).
• Develop tools and scripts to improve the efficiency of operational tasks, implement monitoring processes, and design/deploy monitoring dashboards.
• Help maintain and monitor production environments.
• Experience in Linux and Windows administration and troubleshooting.

Qualifications
• Minimum of 5 years of experience designing, implementing, and managing solutions on AWS and Azure.
• Minimum of 5 years working in Linux and Windows environments.
• Minimum of 5 years of scripting experience with Bash, Python, and PowerShell.
• Certifications such as AWS and Azure certifications and others relevant to cloud computing.
• Deep technical knowledge of cloud computing technologies, cloud storage options, cloud-native applications, serverless architectures, and containerization services.

Skills
• Expertise in networking and security services across AWS and Azure, including VPC, Route 53, Azure DNS, Network Security Groups, and Application Gateway.
• Experience administering databases such as Postgres, MariaDB, MySQL, and/or MSSQL.
• Proficient in scripting and automation tools (e.g., Python, PowerShell, Bash).
• Strong analytical, troubleshooting, and problem-solving skills.
• Exceptional communication and project management abilities to lead cross-functional teams through complex cloud projects.

Skills: aws, azure, windows, linux, python, powershell, bash
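One of the responsibilities above is developing tools and scripts for monitoring processes. As a small illustrative sketch of the kind of threshold-alert logic such a tool might contain (the metric names, sample shape, and limits here are hypothetical, not from the posting, and a production setup would more likely use CloudWatch or Azure Monitor alarms):

```python
def evaluate_alerts(samples, thresholds):
    """Return (metric, latest_value, limit) for every metric whose
    most recent sample exceeds its configured limit.

    `samples` maps metric name -> list of readings (oldest first);
    `thresholds` maps metric name -> alert limit. Both are
    illustrative placeholders for a real monitoring data source.
    """
    alerts = []
    for metric, limit in thresholds.items():
        values = samples.get(metric, [])
        if values and values[-1] > limit:
            alerts.append((metric, values[-1], limit))
    return alerts

# Hypothetical utilization samples, e.g. scraped from hosts
samples = {
    "cpu_percent": [41.0, 55.5, 92.3],
    "mem_percent": [60.1, 62.4, 63.0],
}
thresholds = {"cpu_percent": 85.0, "mem_percent": 80.0, "disk_percent": 90.0}
alerts = evaluate_alerts(samples, thresholds)
```

Metrics with no samples (like `disk_percent` above) are simply skipped rather than treated as breaches; a real tool would usually alert on missing data as well.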

Posted 10 hours ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies