
4269 FastAPI Jobs - Page 48

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 7.0 years

6 - 12 Lacs

Ahmedabad

Work from Office

Perks & Benefits: 5-day work week, group medical insurance, professional development and culture, no bar for the right candidate, no bond policy.

Position Summary: We are looking for a Full-Stack Python Developer with strong expertise in Python and the FastAPI framework to develop and maintain scalable web applications and APIs. The ideal candidate should have a solid understanding of backend development, with experience in authentication, API integrations, and task scheduling. Knowledge of React, JavaScript, and modern frontend frameworks is a plus.

Required Skills:
- Strong proficiency in Python and extensive knowledge of the FastAPI framework.
- Solid understanding of Python's core concepts (data structures, OOP, exception handling, and performance optimization).
- Experience with API development, including authentication (OAuth2, JWT), API gateways, and external API integration (see the sketch after this listing).
- Proficiency in cron jobs and task scheduling to automate processes.
- Understanding of asynchronous programming and event-driven architectures.
- Strong experience with Git and version control workflows.
- Good knowledge of HTML, CSS, and JavaScript for effective collaboration with front-end teams.

Preferred Skills:
- Experience with ReactJS, NextJS, and other front-end libraries.
- Knowledge of Node.js for building microservices or working across stacks.
- Experience with Docker, CI/CD pipelines, and cloud platforms (AWS, Azure, GCP).
- Familiarity with SQL and NoSQL databases.
- Exposure to caching strategies (Redis, Memcached) and message queues (RabbitMQ, Kafka).

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2+ years of hands-on experience in Python and FastAPI development.
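Since the listing highlights OAuth2/JWT authentication with FastAPI, here is a minimal sketch of that flow using FastAPI's security utilities and PyJWT. The secret key, token lifetime, and the absence of a real user store are illustrative assumptions, not details from the posting.

```python
# Minimal OAuth2 + JWT sketch with FastAPI and PyJWT (all names illustrative).
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm

SECRET_KEY = "change-me"  # placeholder; load from the environment in practice
ALGORITHM = "HS256"

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

@app.post("/token")
def login(form: OAuth2PasswordRequestForm = Depends()):
    # A real service would verify form.username / form.password here.
    claims = {
        "sub": form.username,
        "exp": datetime.now(timezone.utc) + timedelta(minutes=30),
    }
    token = jwt.encode(claims, SECRET_KEY, algorithm=ALGORITHM)
    return {"access_token": token, "token_type": "bearer"}

@app.get("/me")
def me(token: str = Depends(oauth2_scheme)):
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return {"user": payload["sub"]}
```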

Posted 3 weeks ago


2.0 years

0 Lacs

Delhi, India

On-site

About Us

Bain & Company is a global consultancy that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry.

In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as BCN (Bain Capability Network), with nodes across various geographies. BCN is the largest and an integral unit of Expert Client Delivery (ECD). ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence, and Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services.

Who You Will Work With

The Consumer Products Center of Expertise collaborates with Bain’s global Consumer Products Practice leadership, client-facing Bain leadership and teams, and with end clients on development and delivery of Bain’s proprietary CP products and solutions. These solutions aim to answer strategic questions of Bain’s CP clients relating to brand strategy (consumer needs, assortment, pricing, distribution), revenue growth management (pricing strategy, promotions, profit pools, trade terms), negotiation strategy with key retailers, optimization of COGS, etc.

You will work as part of the CP CoE team, comprising a mix of a Director, Managers, Project Leads, Associates and Analysts working to implement cloud-based end-to-end advanced analytics solutions. Delivery models on projects vary from working as part of the CP Center of Expertise, a broader global Bain case team within the CP ringfence, or other industry CoEs such as FS / Retail / TMT / Energy / CME with BCN on a need basis.

The AS is expected to have a knack for seeking out challenging problems and coming up with their own ideas, which they will be encouraged to brainstorm with their peers and managers. They should be willing to learn new techniques and be open to solving problems with an interdisciplinary approach. They must have excellent coding skills and should demonstrate a willingness to write modular, reusable, and functional code.

What You’ll Do
- Collaborate with data scientists working with Python, LLMs, NLP, and Generative AI to design, fine-tune, and deploy intelligent agents and chains-based applications.
- Develop and maintain front-end interfaces for AI and data science applications using React.js / Angular / Next.js and/or Streamlit / Dash, enhancing user interaction with complex machine learning and NLP-driven systems.
- Build and integrate Python-based machine learning models with backend systems via RESTful APIs using frameworks like FastAPI, Flask or Django (see the sketch after this listing).
- Translate complex business problems into scalable technical solutions, integrating AI capabilities with robust backend and frontend systems.
- Assist in the design and implementation of scalable data pipelines and ETL workflows using DBT, PySpark, and SQL, supporting both analytics and generative AI solutions.
- Leverage containerization tools like Docker and use Git for version control, ensuring code modularity, maintainability, and collaborative development.
- Deploy ML-powered and data-driven applications on cloud platforms such as AWS or Azure, optimizing for performance, scalability, and cost-efficiency.
- Contribute to internal AI/ML Ops platforms and tools, streamlining model deployment, monitoring, and lifecycle management.
- Create dashboards, visualizations, and presentations using tools like Tableau/Power BI, Plotly, and Seaborn to drive business insights.
- Demonstrate proficiency with Excel and PowerPoint, and strong business communication in stakeholder interactions.

About You
- A Master’s degree or higher in Computer Science, Data Science, Engineering, or related fields; Bachelor's candidates with relevant industry experience will also be considered.
- Proven experience (2 years for Master’s; 3+ years for Bachelor’s) in AI/ML, software development, and data engineering.
- Solid understanding of LLMs, NLP, Generative AI, chains, agents, and model fine-tuning methodologies.
- Proficiency in Python, with experience using libraries such as Pandas, NumPy, Plotly, and Seaborn for data manipulation and visualization.
- Experience working with modern Python frameworks such as FastAPI for backend API development.
- Frontend development skills using HTML, CSS, JavaScript/TypeScript, and modern frameworks like React.js; Streamlit knowledge is a plus.
- Strong grasp of data engineering concepts, including ETL pipelines, batch processing using DBT and PySpark, and relational databases like PostgreSQL and Snowflake.
- Good working knowledge of cloud infrastructure (AWS and/or Azure) and deployment best practices.
- Familiarity with MLOps/AI Ops tools and workflows, including CI/CD pipelines, monitoring, and container orchestration (Docker and Kubernetes).
- Good to have: experience with BI tools such as Tableau or Power BI; prior exposure to consulting projects or the Consumer Products (CP) business domain.

What Makes Us a Great Place To Work

We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
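As a rough illustration of the "ML model behind a RESTful FastAPI endpoint" pattern this role describes, here is a minimal sketch; the model file, feature shape, and route are hypothetical choices, not Bain's implementation.

```python
# Minimal sketch: serving a pre-trained scikit-learn model over FastAPI.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained estimator

class Features(BaseModel):
    values: list[float]  # one flat feature vector

@app.post("/predict")
def predict(features: Features):
    X = np.asarray(features.values).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}
```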

Posted 3 weeks ago


0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

🚀 We're Hiring: Founding Backend Python Developer @ Vakta.AI 🚀

Ready to build the backbone of India's next-generation conversational AI platform? Join us as a Founding Backend Python Developer at Vakta.AI and architect systems that will power millions of intelligent conversations. At Vakta.AI, we're developing India's most advanced conversational AI infrastructure — from high-performance API gateways to real-time chat engines and scalable AI model serving platforms. We're building the robust foundation that makes seamless human-AI interaction possible at massive scale.

Why this is an exceptional opportunity:
- You'll be a founding team member, working directly with our core team to design the entire backend architecture and technical infrastructure from the ground up.
- Competitive salary + significant ESOPs — we want you to grow with us and share in our success story.
- You'll solve complex distributed systems challenges that directly enable breakthrough AI experiences for millions of users.
- Tremendous growth potential, complete ownership, and the chance to build and mentor your own backend engineering team as we expand.

What we're looking for:
- Strong expertise in Python backend development (Django/FastAPI/Flask, async programming, microservices).
- Experience with databases (PostgreSQL, MongoDB, Redis), message queues, and distributed systems.
- Someone who loves building robust, scalable systems and can architect production-ready solutions from scratch.
- A startup mindset: proactive, solution-oriented, and excited about tackling complex technical challenges.

Bonus points for:
- Experience with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes).
- Knowledge of API design, WebSocket implementations, and real-time systems (see the sketch after this listing).
- Understanding of ML model deployment and serving infrastructure.

Additional details:
- Great salary + equity (ESOPs) — competitive package, negotiable for the right candidate.
- We need someone who can join immediately (or very soon!).
- Location: Hybrid/Remote with flexibility.
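Because the posting centres on real-time chat engines, here is a minimal FastAPI WebSocket sketch of that pattern; the route name and echo reply are stand-ins for a real model call, not Vakta.AI's design.

```python
# Minimal real-time chat endpoint with FastAPI WebSockets (illustrative only).
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/chat")
async def chat(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            msg = await ws.receive_text()      # incoming user message
            # A production engine would call an LLM here; we simply echo.
            await ws.send_text(f"bot: {msg}")
    except WebSocketDisconnect:
        pass  # client disconnected
```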

Posted 3 weeks ago


7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

The Position

We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Engineering team to build and improve our platforms, delivering flexible and creative solutions to our utility partners and end users, and helping us achieve our ambitious goals for our business and the planet.

We are seeking a skilled and passionate Senior Software Engineer with expertise in Python and React to join our development team. As a Senior Fullstack Developer, you will be a crucial member of our development team, responsible for leading and driving the development of complex, scalable, and high-performance Python-based applications. One of your main focuses will be developing and supporting efficient, reusable and highly scalable APIs and components that deliver a compelling experience to users across platforms. You will collaborate with cross-functional teams, mentor junior developers, and coordinate with the rest of the team working on different layers of the infrastructure. A commitment to collaborative problem solving, sophisticated design, and a quality product is therefore important. You will take part in planning and strategy to come up with solutions with full ownership, own the development and its quality independently, and be responsible for high-quality deliverables. And you will work with a great team with excellent benefits.

Responsibilities & Skills

You should:
- Be excited to work with talented, committed people in a fast-paced environment.
- Use a data-driven approach and actively work on the product & technology roadmap, at both the strategy level and the day-to-day tactical level.
- Design, build, and maintain high-performance, responsive web applications and dashboards with reusable, reliable code.
- Use a rigorous approach to product improvement and customer satisfaction.
- Love developing great software as a seasoned product engineer.
- Be ready, able, and willing to jump onto a call with a partner or customer to help solve problems.
- Be able to deliver against several initiatives simultaneously.
- Have a strong eye for detail and quality of code.
- Have an agile mindset, strong problem-solving skills, and attention to detail.
- Be able to understand business requirements and translate them into technical requirements.
- Be able to deliver against several initiatives simultaneously as a multiplier.

Required Skills (Python)
- An experienced developer: a minimum of 7+ years of professional experience.
- Python experience, preferably both 2.7 and 3.x.
- Strong Python knowledge: familiar with OOP, data structures and algorithms.
- Work experience and strong proficiency in Python and its associated frameworks (Flask, FastAPI, etc.).
- Experience designing and implementing scalable microservice architectures.
- Familiarity with RESTful APIs and integration of third-party APIs.
- 3+ years building and managing APIs to industry-accepted RESTful standards.
- Demonstrable experience writing unit and functional tests (see the sketch after this listing).
- Application of industry security best practices to application and system development.
- Experience with database systems such as PostgreSQL, MySQL, or MongoDB.

Required Skills (React)
- React experience, preferably React 15 or higher; 2+ years.
- Thorough understanding of React.js and its core principles.
- Familiarity with newer ECMAScript specifications.
- Experience with popular React.js workflows (such as Flux or Redux).
- Experience with modern front-end build pipelines and tools such as Babel, Webpack, NPM, etc.
- A knack for benchmarking and optimization.
- Demonstrable experience writing unit and functional tests.

Preferred

The following experiences are not required, but you'll stand out from other applicants if you have any of them, in our order of importance:
- Experience with cloud infrastructure like AWS/GCP or another cloud service provider.
- Serverless architecture, preferably AWS Lambda.
- Experience with the PySpark, Pandas, SciPy, and NumPy libraries.
- Experience in microservices architecture.
- Solid CI/CD experience.
- You are a Git guru and revel in collaborative workflows.
- You work on the command line confidently and are familiar with the goodies the Linux toolkit provides.
- Knowledge of modern authorization mechanisms, such as JSON Web Tokens.

Qualifications

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
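A small sketch of the kind of unit/functional test the listing asks for, using FastAPI's TestClient; the endpoint under test is a made-up example, not this employer's code.

```python
# Minimal functional test of a FastAPI endpoint with TestClient (pytest-style).
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

client = TestClient(app)

def test_health():
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}
```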

Posted 3 weeks ago


4.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Application Developer – Open Source (Python Developer)
Experience: 4 to 10 Years
Location: Chennai, India
Job Type: Full-Time

Job Summary
We are seeking a skilled Python Developer with strong knowledge and hands-on experience in designing, building, and maintaining scalable, secure, and high-performance applications. The ideal candidate should be comfortable working in a fast-paced environment and collaborating with cross-functional teams.

Key Responsibilities
- Design, develop, test, and deploy Python-based applications.
- Build and maintain scalable and secure backend services and APIs.
- Optimize performance and ensure high availability and responsiveness.
- Collaborate with product managers, designers, and other developers to deliver quality software.
- Write clean, maintainable, and efficient code following best practices.
- Participate in code reviews and contribute to team knowledge sharing.

Must-Have Skills
- Strong programming skills in Python.
- Experience with one or more Python frameworks (e.g., Django, Flask, FastAPI).
- Knowledge of RESTful APIs and integration practices.
- Familiarity with version control systems like Git.
- Understanding of security, performance tuning, and scalability principles.

Good-to-Have Skills
- Experience with containerization tools (e.g., Docker).
- Exposure to CI/CD pipelines.
- Knowledge of relational and/or NoSQL databases.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP).

Preferred Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Strong problem-solving skills and the ability to work independently and collaboratively.

Posted 3 weeks ago


4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position: Python Developer
Experience: 4-6 Years
Job Location: Noida Sector 126

Roles and Responsibilities
- Define scalable event-driven pipelines capable of handling lakhs to millions of messages per minute using Kafka.
- Develop and optimize Python services using Flask, FastAPI (or Django).
- Build REST APIs, CLI tools, and components interfacing with legacy systems (e.g., AMOS, CORBA).
- High-volume event processing: design, implement, and optimize Kafka producers/consumers; manage clusters, topics, partitions, and schema registries (see the sketch after this listing).
- Ensure pipeline reliability, performance, and durable stream handling.
- Database design and optimization: architect PostgreSQL schemas optimized for concurrency, partitioning, and real-time analytics; troubleshoot performance bottlenecks and optimize query execution.
- Mentorship and leadership: mentor junior developers, define engineering best practices, and set standards across the team.

Required Qualifications
- 4+ years of Python experience building production services.
- Proficiency with Flask, FastAPI, or Django.
- Mandatory deep experience with Apache Kafka.
- Strong PostgreSQL skills: schema design, indexing, partitioning, high concurrency.
- Strong debugging, profiling, and optimization skills.
- Proven track record of mentoring junior team members.
- Good communication, collaboration, and documentation habits.

Preferred Skills
- Experience with RabbitMQ or Celery.
- Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
- Knowledge of stream-processing frameworks (e.g., Kafka Streams, Spark, Flink).
- Candidates from the telecom industry/projects are preferred.
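To make the Kafka producer/consumer requirement concrete, here is a minimal sketch using the confluent-kafka client; the broker address, topic, and consumer group are assumptions (the posting names only Apache Kafka), and other clients such as kafka-python follow the same shape.

```python
# Minimal Kafka producer/consumer sketch with confluent-kafka (names illustrative).
import json
from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"  # hypothetical broker
TOPIC = "events"           # hypothetical topic

producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, json.dumps({"event": "signup", "user_id": 42}).encode())
producer.flush()  # block until the message is delivered

consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "analytics",        # consumers in one group share partitions
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=5.0)    # returns None if nothing arrives in time
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```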

Posted 3 weeks ago


1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us

Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

About Yubi

Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million.

In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance:
- Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
- Yubi Invest - Fixed income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
- Financial Services Platform - Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
- Spocto - Debt recovery and risk mitigation platform.
- Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals and predictions for lenders, investors and business enterprises.

So far, we have onboarded 17,000+ enterprises and 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment.

At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Job Summary

We are looking for a highly skilled Data Scientist (LLM) to join our AI and Machine Learning team. The ideal candidate will have a strong foundation in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs), along with hands-on experience in building and deploying conversational AI/chatbots. The role requires expertise in LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. You will work closely with cross-functional teams to drive the development and enhancement of AI-powered applications.

Key Responsibilities
- Develop, fine-tune, and deploy Large Language Models (LLMs) for various applications, including chatbots, virtual assistants, and enterprise AI solutions.
- Build and optimize conversational AI solutions, with at least 1 year of experience in chatbot development.
- Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph.
- Design and develop ML/DL-based models to enhance natural language understanding capabilities.
- Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications (see the sketch after this listing).
- Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc. for domain-specific tasks.
- Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance.
- Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications.
- Develop APIs and pipelines to integrate LLMs into enterprise applications.
- Research and stay up to date with the latest advancements in LLM architectures, frameworks, and AI trends.

Requirements

Required Skills & Qualifications
- 3-5 years of experience in Machine Learning (ML), Deep Learning (DL), and NLP-based model development.
- Hands-on experience in developing and deploying conversational AI/chatbots is a plus.
- Strong proficiency in Python and experience with ML/DL frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
- Experience with LLM agent development frameworks like LangChain, LlamaIndex, AutoGen, LangGraph.
- Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) and embedding models.
- Understanding of prompt engineering and fine-tuning LLMs.
- Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale.
- Experience working with APIs, Docker, and FastAPI for model deployment.
- Strong analytical and problem-solving skills.
- Ability to work independently and collaboratively in a fast-paced environment.

Good to Have
- Experience with multi-modal AI models (text-to-image, text-to-video, speech synthesis, etc.).
- Knowledge of knowledge graphs and symbolic AI.
- Understanding of MLOps and LLMOps for deploying scalable AI solutions.
- Experience in automated evaluation of LLMs and bias mitigation techniques.
- Research experience or published work in LLMs, NLP, or Generative AI is a plus.

Why Join Us?
- Opportunity to work on cutting-edge LLM and Generative AI projects.
- Collaborative and innovative work environment.
- Competitive salary and benefits.
- Career growth opportunities in AI and ML research and development.
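As a hedged illustration of the RAG retrieval step this role involves, here is a minimal FAISS sketch; the embedding model, documents, and prompt template are illustrative choices, not Yubi's stack.

```python
# Minimal RAG retrieval sketch: FAISS vector search over sentence-transformer
# embeddings. Model name and documents are illustrative assumptions.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Yubi Invest is a fixed income securities platform.",
    "FastAPI is commonly used to serve LLM endpoints.",
]

vecs = model.encode(docs, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(vecs.shape[1])   # inner product == cosine on unit vectors
index.add(vecs)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = model.encode([query], normalize_embeddings=True).astype("float32")
    _, ids = index.search(q, k)            # top-k most similar documents
    return [docs[i] for i in ids[0]]

question = "How are LLMs served?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to the chosen LLM (GPT, LLaMA, Claude, etc.).
```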

Posted 3 weeks ago


6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

🚀 Hiring: #Dot_NET_Full_Stack_Developer
📍 Location: Hyderabad, India
📅 Experience: 4 – 6 Years
📧 Interested candidates can send their resumes to: ayesha@coretek.io

We are looking for a passionate and skilled .NET Full Stack Developer to join our team!

Key Skills Required:
✅ #DotNET_Full_Stack_Development
✅ #Python_server-side_scripting and #API_development
✅ #Python_Frameworks: #Flask, #Django, or #FastAPI
✅ #Azure_Cloud development
✅ #JavaScript, #HTML, and #CSS

#DotNet #FullStackDeveloper #HyderabadJobs #PythonDeveloper #AzureDeveloper #Flask #Django #FastAPI #JavaScriptDeveloper #FrontendDeveloper #BackendDeveloper #NowHiring #ITJobs #TechJobsIndia #HiringAlert #CoretekJobs

Posted 3 weeks ago


7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Software Engineer (Next.js + FastAPI)
Job Type: Full-Time, Contractor
Location: Pune / Gurugram

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
We're looking for an experienced Software Engineer with strong hands-on expertise in Next.js and FastAPI to join our growing engineering team. In this role, you will take end-to-end ownership of features, work across the frontend and backend stack, and collaborate closely with product and design teams. If you're passionate about crafting performant web applications, managing complex tasks effectively, and communicating your ideas clearly — we’d love to hear from you.

Key Responsibilities:
- Design, develop, and deploy modern web applications using Next.js (React) and FastAPI.
- Build scalable APIs and backend services with performance and maintainability in mind.
- Translate product requirements into high-quality, testable, and maintainable code.
- Manage project tasks, timelines, and priorities with minimal supervision.
- Collaborate with designers, product managers, and fellow engineers to deliver impactful user experiences.
- Conduct code reviews, identify and fix bugs, and help maintain a high standard of code quality.
- Stay current with emerging trends in full-stack development and propose improvements proactively.

Required Skills and Qualifications:
- 7+ years of relevant full-stack development experience.
- Strong proficiency in Next.js, React, and modern JavaScript/TypeScript.
- Hands-on experience with FastAPI, Python, and asynchronous backend patterns (see the sketch after this listing).
- Solid knowledge of RESTful APIs, microservices, and modern software architecture.
- Ability to manage tasks independently and communicate clearly with stakeholders.
- Excellent problem-solving skills and a bias for action.
- Strong verbal and written communication abilities.

Preferred Qualifications:
- Experience working with cloud infrastructure (AWS, GCP, or Azure).
- Familiarity with Docker, CI/CD pipelines, and scalable deployment workflows.
- Previous experience in a leadership, mentoring, or tech lead role.
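A minimal sketch of the "asynchronous backend patterns" the posting mentions: a FastAPI endpoint fanning out concurrent calls with httpx. The service URLs are hypothetical; a Next.js page would simply fetch this route.

```python
# Minimal async fan-out pattern in FastAPI: two downstream calls run concurrently.
import asyncio

import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/dashboard")
async def dashboard():
    async with httpx.AsyncClient() as client:
        # Hypothetical internal services, queried in parallel rather than serially.
        users, orders = await asyncio.gather(
            client.get("http://users-svc/summary"),
            client.get("http://orders-svc/summary"),
        )
    return {"users": users.json(), "orders": orders.json()}
```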

Posted 3 weeks ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚀 AI Engineering Intern (SDE) – Founding Tech Interns | Opportunity of a Lifetime

Location: Gurgaon (In-Office)
Duration: 3–6 months (flexible based on academic schedule)
Start Date: Immediate openings
Open to: Tier-1 college students graduating in 2025 and 2026
Compensation: Stipend + Pre-Placement Offer potential

🧠 About Us – Darwix AI
Darwix AI is on a mission to solve a problem no one's cracked yet — building real-time, multilingual conversational intelligence for omnichannel enterprise sales teams using the power of Generative AI. We're building India’s answer to Gong + Refract + Harvey AI — trained on 1M+ hours of sales conversations, and packed with industry-first features like live agent coaching, speech-to-text in 11 Indic languages, and autonomous sales enablement nudges. We’ve got global clients, insane velocity, and a team of ex-operators from IIMs, IITs, and top-tier AI labs.

🌌 Why This Internship is Unlike Anything Else
- Work on a once-in-a-decade problem — pushing the boundaries of GenAI + speech + edge compute.
- Ship real products used by enterprise teams across India and the Middle East.
- Experiment freely — train models, optimize pipelines, fine-tune LLMs, or build scrapers that work in 5 languages.
- Move fast, learn faster — direct mentorship from the founding engineering and AI team.
- Proof-of-excellence opportunity — stand out in every future job, B-school, or YC application.

💻 What You'll Do
- Build and optimize core components of our real-time agent assist engine (Python + FastAPI + Kafka + Redis; see the sketch after this listing).
- Train, evaluate, and integrate Whisper, wav2vec, or custom STT models on diverse datasets.
- Work on LLM/RAG pipelines, prompt engineering, or vector DB integrations.
- Develop internal tools to analyze, visualize, and scale insights from conversations across languages.
- Optimize for latency, reliability, and multilingual accuracy in dynamic customer environments.

🌟 Who You Are
- Pursuing a B.Tech/B.E. or dual degree from IITs, IIITs, BITS, NIT Trichy/Warangal/Surathkal, or other Tier-1 institutes, preferably in Computer Science or allied fields.
- Comfortable with Python, REST APIs, and database operations. Bonus: familiarity with FastAPI, LangChain, or HuggingFace.
- Passionate about AI/ML, especially NLP, GenAI, ASR, or multimodal systems.
- Always curious, always shipping, always pushing yourself beyond the brief.
- Looking for an internship that actually matters — not one where you're just fixing CSS.

🌐 Tech You’ll Touch
- Python, FastAPI, Kafka, Redis, MongoDB, Postgres
- Whisper, Deepgram, Wav2Vec, HuggingFace Transformers
- OpenAI, Anthropic, Gemini APIs
- LangChain, FAISS, Pinecone, LlamaIndex
- Docker, GitHub Actions, Linux environments

🎯 What’s in it for you
- A pre-placement offer for the best performers.
- A chance to be a founding engineer post-graduation.
- Exposure to the VC ecosystem, client demos, and GTM strategies.
- Stipend + access to the tools/courses/compute resources you need to thrive.

🚀 Ready to Build the Future?
If you’re one of those rare folks who can combine deep tech with deep curiosity, this is your call to adventure. Join us in building something that’s never been done before. Apply now at careers@cur8.in. Attach your CV + GitHub/portfolio + a line on why this excites you. Bonus points if you share a project you’ve built or an AI problem you’re obsessed with.

Darwix AI | GenAI for Revenue Teams | Built from India for the World
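One concrete flavour of the Python + Redis hot path a real-time assist engine needs, sketched minimally; the key scheme, TTL, and the transcribe() stub are assumptions for illustration, not Darwix AI's code.

```python
# Minimal sketch: Redis as a low-latency cache in front of a speech-to-text call.
import hashlib

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder for a Whisper/wav2vec model call."""
    return "hello world"

def cached_transcript(audio_chunk: bytes) -> str:
    key = "stt:" + hashlib.sha1(audio_chunk).hexdigest()
    hit = r.get(key)
    if hit is not None:
        return hit                  # served from cache, sub-millisecond
    text = transcribe(audio_chunk)  # expensive model call on cache miss
    r.setex(key, 3600, text)        # cache the result for an hour
    return text
```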

Posted 3 weeks ago


5.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Experience: 5 to 10 Years

Responsibilities:
- Design, develop, and implement well-tested, reusable, and maintainable Python code.
- Utilize various Python libraries and frameworks (e.g., FastAPI, Django, Flask, Pandas, NumPy) to implement functionalities.
- Integrate various data sources (APIs, databases) to manipulate and analyze data.
- Optimize code for performance, scalability, and security.
- Write unit and integration tests for code coverage and stability.
- Collaborate with designers and other developers to translate requirements into efficient solutions.
- Participate in code reviews, providing constructive feedback to improve code quality.
- Stay up to date with the latest Python trends, libraries, and best practices.
- Debug and troubleshoot complex issues to ensure optimal application performance.
- Proactively suggest improvements and optimizations to the existing codebase.

Posted 3 weeks ago


8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position: AI + Backend Engineer
Location: Gurugram
Experience Required: 7–8+ years

We are looking for a highly capable and innovative AI + Backend Engineer to join our dynamic team in Gurugram. The ideal candidate should bring in-depth experience working with Large Language Models (LLMs), Natural Language Processing (NLP), and intelligent agent frameworks, combined with solid backend development skills. This role requires a strong grasp of machine learning operations, API-centric architectures, and distributed systems.

Role & Responsibilities:
- Design and develop advanced AI solutions leveraging LLMs for tasks like content generation, entity extraction, summarization, and intelligent agent development.
- Fine-tune and implement models for dense retrieval, question answering, and semantic search use cases.
- Build and maintain high-performance backend systems and RESTful APIs that support AI model integration.
- Use vector databases and embedding-based retrieval methods to enhance model outputs (e.g., Pinecone, FAISS, Weaviate).
- Apply Retrieval-Augmented Generation (RAG) frameworks to support dynamic and context-aware responses.
- Work closely with ML engineers and researchers to streamline model training, deployment, and optimization processes.
- Monitor and enhance the performance of deployed models and inference pipelines in production environments.
- Set up and manage end-to-end data pipelines for preprocessing, training, and deployment workflows.
- Stay abreast of the latest research and industry developments in LLMs, NLP, and AI agents to continuously improve solutions.

Key Requirements:
- 7–8+ years of experience in AI/ML systems development and backend engineering, with a strong focus on NLP and LLM-based systems.
- Expertise in Python and deep learning libraries such as PyTorch, TensorFlow, and Hugging Face Transformers.
- Strong experience in model fine-tuning, prompt engineering, and building intelligent agent-based applications.
- Practical knowledge of working with vector databases and retrieval mechanisms.
- Proficiency in building and scaling backend platforms using frameworks like FastAPI, Flask, or Django.
- Solid experience with cloud platforms like AWS, Google Cloud Platform, or Azure for AI deployment.
- Skilled in using Docker, Kubernetes, and related tools for model containerization and orchestration.
- Familiarity with MLOps practices, CI/CD pipelines, and model version control.
- Excellent analytical, debugging, and communication skills, with the ability to work effectively in agile, cross-functional teams.

Posted 3 weeks ago


2.0 years

0 Lacs

Bangalore North Rural, Karnataka, India

On-site

About Us

Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

About Yubi

Yubi, formerly known as CredAvenue, is redefining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plentiful, and we equip you with the tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million.

In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance:
- Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any capital requirements.
- Yubi Invest - Fixed income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
- Financial Services Platform - Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
- Spocto - Debt recovery and risk mitigation platform.
- Accumn - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises.

So far, we have onboarded 17,000+ enterprises and 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed, and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment.

At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come join the club to be a part of our epic growth story.

About The Job

Job Title: Data Scientist 2 (LLM/GenAI)
Location: Bangalore
Experience: 2-4 years
Employment Type: Full-time

Job Summary:
We seek a highly skilled Data Scientist (LLM) to join our AI and Machine Learning team. The ideal candidate will have a strong foundation in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs), along with hands-on experience in building and deploying conversational AI/chatbots. The role requires expertise in LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. You will work closely with cross-functional teams to drive the development and enhancement of AI-powered applications.

Key Responsibilities:
- Develop, fine-tune, and deploy Large Language Models (LLMs) for various applications, including chatbots, virtual assistants, and enterprise AI solutions.
- Build and optimize conversational AI solutions, with at least 1 year of experience in chatbot development.
- Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph.
- Design and develop ML/DL-based models to enhance natural language understanding capabilities.
- Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications.
- Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc., for domain-specific tasks.
- Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance.
- Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications.
- Develop APIs and pipelines to integrate LLMs into enterprise applications.
- Research and stay up-to-date with the latest advancements in LLM architectures, frameworks, and AI trends.

Requirements

Required Skills & Qualifications:
- 2-4 years of experience in Machine Learning (ML), Deep Learning (DL), and NLP-based model development.
- Hands-on experience in developing and deploying conversational AI/chatbots is a plus.
- Strong proficiency in Python and experience with ML/DL frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
- Experience with LLM agent development frameworks like LangChain, LlamaIndex, AutoGen, LangGraph.
- Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) and embedding models.
- Understanding of prompt engineering and fine-tuning LLMs.
- Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale.
- Experience working with APIs, Docker, and FastAPI for model deployment.
- Strong analytical and problem-solving skills.
- Ability to work independently and collaboratively in a fast-paced environment.

Good to Have:
- Experience with multi-modal AI models (text-to-image, text-to-video, speech synthesis, etc.).
- Knowledge of knowledge graphs and symbolic AI.
- Understanding of MLOps and LLMOps for deploying scalable AI solutions.
- Experience in automated evaluation of LLMs and bias mitigation techniques.
- Research experience or published work in LLMs, NLP, or Generative AI is a plus.

Benefits

Why Join Us?
- This is an opportunity to work on cutting-edge LLM and Generative AI projects.
- Collaborative and innovative work environment.
- Competitive salary and benefits.
- Career growth opportunities in AI and ML research and development.

Posted 3 weeks ago


0 years

0 Lacs

Chandigarh

On-site

Role Overview
We are looking for a Python AI/ML Developer with a passion for Large Language Models (LLMs), rapid prototyping, and scalable backend development. If you’re excited about building AI-native applications using frameworks like FastAPI, Django, Gradio, and Streamlit—and you’ve dabbled in digital or affiliate marketing—we want to hear from you.

Key Responsibilities
- Build and deploy AI-powered applications using Python and LLM APIs (e.g., OpenAI, LLaMA, Mistral).
- Develop RESTful and asynchronous APIs using FastAPI and Django.
- Integrate and manage databases like PostgreSQL and MongoDB.
- Create intuitive frontend UIs with Gradio and Streamlit (see the sketch after this listing).
- Design and fine-tune prompts for various LLM use cases (text generation, classification, semantic search, etc.).
- Collaborate with product, design, and marketing teams to translate ideas into production-ready tools.
- Bonus: apply your understanding of digital marketing funnels or affiliate campaigns in product design.

Requirements
- Strong foundation in Python and a basic understanding of AI/ML workflows.
- Exposure to one or more LLM APIs (OpenAI, Cohere, Hugging Face models, etc.).
- Working knowledge of FastAPI, Django, PostgreSQL, and MongoDB.
- Experience or project work using Gradio and/or Streamlit.
- Demonstrated ability to write effective prompts and build LLM chains or workflows.
- Strong problem-solving mindset and eagerness to learn.
- Git and version control proficiency.

Bonus Points
- Knowledge of digital marketing, affiliate marketing, or performance marketing.
- Knowledge of LangChain, LLMOps, or RAG-based pipelines.

Job Type: Full-time
Work Location: In person
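A minimal Gradio sketch of the LLM-backed UI work this role describes; the answer() stub stands in for a real LLM API call and is purely illustrative.

```python
# Minimal Gradio UI wrapping a (stubbed) LLM-backed function.
import gradio as gr

def answer(prompt: str) -> str:
    # Placeholder: replace with a real LLM API call (OpenAI, LLaMA, etc.).
    return f"(model reply to: {prompt})"

demo = gr.Interface(fn=answer, inputs="text", outputs="text",
                    title="Prompt Playground")

if __name__ == "__main__":
    demo.launch()  # serves a local web UI
```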

Posted 3 weeks ago


3.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

PwC AC is hiring for Data Scientist. Apply and get a chance to work with one of the Big 4 companies, PwC AC.

Job Title: Data Scientist
Years of Experience: 3-7 years
Shift Timings: 11 AM - 8 PM
Qualification: Graduate and above (full time)

About PwC CTIO – AI Engineering
PwC’s Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI agents, and more. Our mission is to continuously explore what's next—enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.

Role Overview
We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team’s innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.

Key Responsibilities
- Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
- Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments.
- Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar; also develop applications in Streamlit/Chainlit for demo purposes (see the sketch after this listing).
- Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
- Fine-tune and pre-train LLMs (HuggingFace and similar libraries) to align with business objectives.
- Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
- Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
- Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team’s work.

Required Skills & Experience
- 3 to 7 years of experience in Data Science/ML/AI roles.
- Bachelor’s degree in Computer Science, Engineering, or an equivalent technical discipline (BE/BTech/MCA).
- Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc.
- Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
- Experience with agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
- Strong knowledge of NLP techniques, transformer architectures, and text analysis.
- Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
- Understanding of production-level AI systems, including CI/CD, model monitoring, and cloud-native architecture (need not develop from scratch).
- Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, decision forests, Naive Bayes, neural networks, etc.
- Exposure to deploying AI models via APIs and integrating them into larger data ecosystems.
- Strong understanding of model operationalization and lifecycle management.

Good to Have
- Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
- Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker).
- Experience in full-stack AI applications, including visualization (e.g., Power BI, D3.js).
- Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.

Soft Skills & Team Expectations
- Strong written and verbal communication; able to explain complex models to business stakeholders.
- Ability to independently document work, manage requirements, and self-drive technical discovery.
- Desire to innovate, improve, and automate existing processes and solutions.
- Active contributor to team knowledge sharing, technical forums, and innovation drives.
- Strong interpersonal skills to build relationships across cross-functional teams.
- A mindset of continuous learning and technical curiosity.

Preferred Certifications (at least two are preferred)
- Certifications in Machine Learning, Deep Learning, or Natural Language Processing.
- Python programming certifications (e.g., PCEP/PCAP).
- Cloud certifications (Azure/AWS/GCP) such as Azure AI Engineer, AWS ML Specialty, etc.

Why Join PwC CTIO?
- Be part of a mission-driven AI innovation team tackling industry-wide transformation challenges.
- Gain exposure to bleeding-edge GenAI research, rapid prototyping, and product development.
- Contribute to a diverse portfolio of AI solutions spanning pharma, finance, and core business domains.
- Operate in a startup-like environment within the safety and structure of a global enterprise.
- Accelerate your career as a deep tech leader in an AI-first future.
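A minimal Streamlit sketch of the demo-app pattern mentioned in the responsibilities; the backend URL and response schema are hypothetical assumptions, not PwC's stack.

```python
# Minimal Streamlit demo front-end posting to a (hypothetical) FastAPI backend.
import requests
import streamlit as st

st.title("GenAI Demo")
prompt = st.text_area("Prompt")

if st.button("Run") and prompt:
    # Hypothetical model-serving endpoint from the same stack.
    resp = requests.post("http://localhost:8000/generate", json={"prompt": prompt})
    st.write(resp.json()["text"])  # assumes the API returns {"text": ...}
```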

Posted 3 weeks ago


7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards, and monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers), and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPCs, IAM, and TLS/SSL for secure communication.
15. Knowledge of API gateways, service meshes (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required
Technical Architect with 7-12 years of experience.

Salary
22-25 LPA

Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Work Location: In person

Posted 3 weeks ago



5.0 - 6.0 years

8 - 15 Lacs

India

On-site

We are seeking a highly skilled Python Developer with expertise in Machine Learning and Data Analytics to join our team. The ideal candidate should have 5-6 years of experience in developing end-to-end ML-driven applications and handling data-driven projects independently. You will be responsible for designing, developing, and deploying Python-based applications that leverage data analytics, statistical modeling, and machine learning techniques. Key Responsibilities: Design, develop, and deploy Python applications for data analytics and machine learning. Work independently on machine learning model development, evaluation, and optimization. Develop ETL pipelines and process large-scale datasets for analysis. Implement scalable and efficient algorithms for predictive analytics and automation. Optimize code for performance, scalability, and maintainability. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Integrate APIs and third-party tools to enhance functionality. Document processes, code, and best practices for maintainability. Required Skills & Qualifications: 5-6 years of professional experience in Python application development. Strong expertise in Machine Learning, Data Analytics, and AI frameworks (TensorFlow, PyTorch, Scikit-learn, etc.). Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib. Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.). Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.). Strong experience in developing APIs and microservices using FastAPI, Flask, or Django. Good understanding of data structures, algorithms, and software development best practices. Strong problem-solving and debugging skills. Ability to work independently and handle multiple projects simultaneously. Good to have - Working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications. Job Type: Full-time Pay: ₹800,000.00 - ₹1,500,000.00 per year Schedule: Day shift Ability to commute/relocate: Chandrasekharpur, Bhubaneswar, Orissa: Reliably commute or planning to relocate before starting work (Preferred) Experience: Python: 5 years (Required) Work Location: In person Expected Start Date: 01/08/2025
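As a rough illustration of the Pandas + scikit-learn workflow this posting expects (assumed stack; the dataset and model choice are placeholders, not the employer's code):

# Illustrative sketch: a typical train/evaluate loop with Pandas + scikit-learn.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer(as_frame=True)  # features as a DataFrame
X, y = data.data, data.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features, then fit a simple baseline classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))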

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Role Proficiency: Leverage expertise in a technology area (e.g., Informatica transformations, Teradata data warehouse, Hadoop analytics). Responsible for architecture for small/mid-size projects. Outcomes Implement either data extraction and transformation for a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), a data analysis solution, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP). Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data or design data fetching and dashboards. Design information structure, work- and dataflow navigation. Define backup, recovery and security specifications. Enforce and maintain naming standards and a data dictionary for data models. Provide estimates or guide the team in preparing them. Help the team develop proofs of concept (POCs) and solutions relevant to customer problems; able to troubleshoot problems while developing POCs. Architect/Big Data Speciality Certification in AWS/Azure/GCP (or general, for example via Coursera or a similar learning platform, or any ML certification). Measures Of Outcomes Percentage of billable time spent in a year on developing and implementing data transformation or data storage. Number of best practices documented for any new tool and technology emerging in the market. Number of associates trained on the data service practice. Outputs Expected Strategy & Planning: Create or contribute to short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators and scientific research communities, or attend conferences with respect to data in the cloud. Operational Management Help Architects to establish governance, stewardship and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications and systems to support data technology goals. Collaborate with project managers and business teams for all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility and multi-platform integration. Project Control And Review Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics. Knowledge Management & Capability Development Publish and maintain a repository of solutions, best practices, standards and other knowledge articles for data management. Conduct and facilitate knowledge sharing and learning sessions across the team. Gain industry-standard certifications on technology or area of expertise. Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new members of the team in technical areas. Gain and cultivate domain expertise to provide the best and optimized solution to the customer (delivery). Requirement Gathering And Analysis Work with customer business owners and other teams to collect, analyze and understand the requirements, including NFRs/define NFRs. Analyze gaps/trade-offs based on the current system context and industry practices; clarify the requirements by working with the
customer. Define the systems and sub-systems that make up the program. People Management Set goals and manage performance of team engineers. Provide career guidance to technical specialists and mentor them. Alliance Management Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the Architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and their relevance to the program. Technology Consulting In collaboration with Architects II and III, analyze the application and technology landscape, processes and tools to arrive at the architecture options best fit for the client program. Analyze cost vs. benefits of solution options. Support Architects II and III to create a technology/architecture roadmap for the client. Define architecture strategy for the program. Innovation And Thought Leadership Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices. Project Management Support Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies. Stakeholder Management Monitor the concerns of internal stakeholders like Product Managers & RTEs and external stakeholders like client architects on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at team and program levels. New Service Design Identify potential opportunities for new service offerings based on customer voice/partner inputs. Conduct beta testing/POC as applicable. Develop collateral and guides for GTM. Skill Examples Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of Architects. Use technology knowledge to create Proof of Concept (POC)/(reusable) assets under the guidance of the specialist. Apply best practices in own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide and defend the technology choices made; review solutions under guidance. Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST. Use independent knowledge of Design Patterns, Tools and Principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by Specialists for efficiency (consumption of hardware, memory, memory leaks, etc.). Use knowledge of Software Development Process, Tools & Techniques to identify and assess incremental improvements for the software development process, methodology and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impact.
Implement global standards and guidelines relevant to programming and development; come up with 'points of view' and new technological ideas. Use knowledge of Project Management & Agile Tools and Techniques to support, plan and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies. Use knowledge of Project Metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders. Use knowledge of Estimation and Resource Planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place. Strong proficiency in understanding data workflows and dataflow. Attention to detail. High analytical capabilities. Knowledge Examples Data visualization. Data migration. RDBMSs (relational database management systems), SQL. Hadoop technologies like MapReduce, Hive and Pig. Programming languages, especially Python and Java. Operating systems like UNIX and MS Windows. Backup/archival software. Additional Comments AI Architect Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities. Key Responsibilities: Design and architect AI-based solutions including multi-agent GenAI systems using LLMs and RAG pipelines. Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants. Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics. Leverage GenAI (LLMs) and Time Series models to drive intelligent observability and performance management. Work closely with product, engineering, and operations teams to align solutions with domain and customer needs. Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices. Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments. Key Skills & Technology Areas: AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI. LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps. Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis. Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation. Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory. Cloud & Tools: AWS (Bedrock, SageMaker, Lambda); Azure (Azure ML, Azure Databricks, Synapse); GCP (Vertex AI – optional). Observability Integration: Splunk, ELK Stack, Prometheus. DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning. Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design. Other Requirements: Proven ability to work independently and collaboratively in agile, innovation-driven teams. Strong problem-solving mindset and product-oriented thinking.
Excellent communication and technical storytelling skills. Flexibility to work across multiple opportunities based on business priorities. Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus. Skills: Python, Pandas, AI/ML, GenAI
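To make the RAG pipelines mentioned above concrete, here is a toy, self-contained sketch of the retrieval step: embed documents, rank them against a query, and assemble an LLM prompt. The embed() function is a deliberate stand-in; a real system would use a sentence-transformer or an embeddings API plus a vector database such as FAISS or Pinecone.

# Toy RAG retrieval: rank documents by cosine similarity, build a prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical toy embedding: normalized character-frequency vector.
    v = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1
    n = np.linalg.norm(v)
    return v / n if n else v

docs = [
    "Ticket volume spiked after the 2.3 release.",
    "RCA: the cache layer evicted sessions too aggressively.",
    "Forecasting uses a weekly seasonal time-series model.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "what caused the session errors?"
scores = doc_vecs @ embed(query)  # cosine similarity (vectors are unit-norm)
top = [docs[i] for i in np.argsort(scores)[::-1][:2]]

prompt = "Answer using only this context:\n" + "\n".join(top) + f"\n\nQuestion: {query}"
print(prompt)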

Posted 3 weeks ago

Apply

1.0 - 3.0 years

60 - 96 Lacs

Pune

Work from Office

Responsibilities: Experience with Machine Learning, NLP, Generative AI, LLMs, APIs, and Django. Develop CRM solutions with HubSpot & FastAPI/Flask frameworks. Design, develop & maintain RAG applications on cloud platforms. Deploy Model Context Protocol servers.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Please find below the JD for Senior Full Stack Engineer (Python + MERN): Location: Noida Company Profile: MeeTri is a global provider of Information Technology services and business solutions. We leverage deep industry and functional expertise and leading technology practices to help clients transform their highest-value business processes and improve their business performance. MeeTri is led by a team of seasoned executives with extensive experience, industry knowledge, and technology expertise. Our management team is committed to excellence in customer satisfaction and technical innovation, and to partnering with best-of-breed technology and distribution partners. Our vision is to achieve global IT services leadership in providing value-added, high-quality IT solutions to our clients in selected horizontal and vertical segments, by combining technology skills, domain expertise, process focus, and a commitment to long-term client relationships. Job Description: Strong experience with Python (Flask, FastAPI) for backend development, building efficient APIs (REST/GraphQL). Optimize backend performance using AsyncIO, multithreading, and multiprocessing techniques. Lead technical aspects of projects, including architecture design and technology stack decisions. Develop modern web applications using the MERN stack (MongoDB, Express.js, React.js, Node.js). Mentor and guide junior developers, conduct code reviews, and enforce best practices. Work with PostgreSQL and MongoDB for database management, optimization, and design. Deploy and manage applications using AWS (EC2, Lambda, S3, RDS). Implement CI/CD pipelines and automation for smooth deployment and continuous integration. Collaborate with UX/UI designers to create intuitive and responsive user interfaces. Participate in Agile development processes, including sprint planning and retrospectives. Ensure high application performance, scalability, and security across both frontend and backend. Implement cloud-based solutions for high availability and disaster recovery. Skills: Proficient in Python, Node.js, React.js, Express.js, Flask, and Django. Experience with PostgreSQL, MongoDB, and database optimization. Expertise in AsyncIO, multithreading, and multiprocessing for concurrency. Familiarity with AWS services (EC2, Lambda, S3, RDS). Experience with Git, CI/CD tools, and version control systems. Ability to lead teams, mentor junior developers, and make technical decisions. Strong problem-solving and debugging skills.
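The AsyncIO concurrency requirement above is easy to illustrate: fan out several I/O-bound calls concurrently instead of awaiting them one by one. A minimal sketch follows; fetch_record() is a hypothetical stand-in for a real database or HTTP call.

# Sketch of the AsyncIO pattern: ten 0.5s calls overlap, so the total
# runtime is ~0.5s rather than ~5s of sequential awaits.
import asyncio
import time

async def fetch_record(record_id: int) -> dict:
    await asyncio.sleep(0.5)  # simulates network/database latency
    return {"id": record_id, "status": "ok"}

async def main() -> None:
    start = time.perf_counter()
    results = await asyncio.gather(*(fetch_record(i) for i in range(10)))
    print(f"{len(results)} records in {time.perf_counter() - start:.2f}s")

asyncio.run(main())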

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

We are hiring a Lead, Data Engineer to join our team. At Kroll, we are building a strong Data practice with artificial intelligence, machine learning and analytics, and we’re looking for you to join our growing portfolio. You will be involved in designing, building, and integrating data from various sources and working with an advanced engineering team and professionals from the world’s largest financial institutions, law enforcement, and government agencies. The day-to-day responsibilities include but are not limited to: - Design and build organizational data infrastructure and architecture - Identify, design and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes for data delivery - Choose the best tools/services/resources to build robust data pipelines for data ingestion, connection, transformation, and distribution - Design, develop and manage ELT applications - Work with global teams to deliver fault-tolerant, high-quality data pipelines Requirements: - Advanced experience writing ETL/ELT jobs - Advanced experience with Azure, AWS and the Databricks platform (mostly data-related services) - Advanced experience with Python, the Spark ecosystem (PySpark + Spark SQL), and SQL databases - Ability to develop REST APIs, Python SDKs or libraries, Spark jobs, etc. - Proficiency in using open-source tools, frameworks, and Python libraries like FastAPI, Pydantic, Polars, Pandas, PySpark, Delta Lake tables, Docker, Kubernetes, etc. - Experience in Lakehouse & Medallion architecture, Data Governance, Data Pipeline Orchestration - Excellent communication skills - Ability to conduct data profiling, cataloging, and mapping for technical data flows - Ability to work with an international team Desired Skills: - Strong cloud architecture principles: compute, storage, networks, security, cost savings, etc. - Advanced SQL and Spark query/data pipeline performance tuning skills - Experience and knowledge of building a Lakehouse using technologies including Azure Databricks, Azure Data Lake, SQL, PySpark, etc. - Programming paradigms like OOP, async programming, batch processing - Knowledge of CI/CD, Git About Kroll In a world of disruption and increasingly complex business challenges, our professionals bring truth into focus with the Kroll Lens. Our sharp analytical skills, paired with the latest technology, allow us to give our clients clarity—not just answers—in all areas of business. We value the diverse backgrounds and perspectives that enable us to think globally. As part of One team, One Kroll, you’ll contribute to a supportive and collaborative work environment that empowers you to excel. Kroll is the premier global valuation and corporate finance advisor with expertise in complex valuation, disputes and investigations, M&A, restructuring, and compliance and regulatory consulting. Our professionals balance analytical skills, deep market insight and independence to help our clients make sound decisions. As an organization, we think globally—and encourage our people to do the same. Kroll is committed to equal opportunity and diversity, and recruits people based on merit. In order to be considered for a position, you must formally apply via careers.kroll.com
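To ground the ELT requirement above, here is a hedged sketch of a small PySpark job in the medallion style the posting lists: read a raw (bronze) source, clean it, aggregate, and write a silver Delta table. The paths, column names, and schema are illustrative assumptions, and writing "delta" format requires the delta-spark package configured on the cluster.

# Illustrative PySpark ELT job (not Kroll's code): bronze -> silver.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-elt").getOrCreate()

raw = spark.read.json("s3://raw-zone/orders/")  # hypothetical bronze source

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

daily = cleaned.groupBy(F.to_date("order_ts").alias("order_date")).agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

daily.write.format("delta").mode("overwrite").save("s3://silver-zone/daily_orders/")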

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing multi-agent and agentic RAG workflows in production. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help build an embedded AI CoPilot across the different products at Netskope. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics. Your contributions will have a major impact on our global customer base and across the industry through our market-leading products. You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Drive the end-to-end development and deployment of CoPilot, an embedded assistant powered by cutting-edge Multi-Agent Workflows. This will involve designing and implementing complex interactions between various AI agents and tools to deliver seamless, context-aware assistance across our product suite. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps and LLMOps best practices to deploy and monitor machine learning models and agentic workflows in production. Implement comprehensive evaluation and observability strategies for the CoPilot. Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions across platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Has built and deployed a multi-agent or agentic RAG workflow in production. Expertise in prompt engineering patterns such as chain of thought, ReAct, and zero/few shot. Experience in LangGraph/AutoGen/AWS Bedrock/Pydantic AI/Crew AI. Strong understanding of MLOps practices and tools (e.g., SageMaker/MLflow/Kubeflow/Airflow/Dagster). Experience with evaluation and observability tools like Langfuse/Arize Phoenix/LangSmith. Data Engineering Proficiency in working with vector databases such as pgvector, Pinecone, and Weaviate. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Software Engineering Expertise in Python with experience in one other language (C++/Java/Go) for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Experience building and consuming MCP clients and servers. Experience with asynchronous programming, including WebSockets, FastAPI, and Sanic. Good-to-Have Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like PyTorch, TensorFlow and Scikit-learn. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Graph database knowledge is a plus. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
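The agentic workflows this role centers on reduce, at their core, to a loop in which a model emits structured actions and the runtime dispatches tools. Below is a heavily simplified, self-contained sketch of that ReAct-style loop; the "LLM" is a mock emitting canned JSON, and the alert-triage tool is hypothetical — production systems would use frameworks like LangGraph or AutoGen with a real model.

# Simplified agent/tool-dispatch loop with a mocked LLM (illustrative only).
import json

def lookup_alert(alert_id: str) -> str:
    return f"alert {alert_id}: spike in 5xx errors on gateway"  # canned tool result

TOOLS = {"lookup_alert": lookup_alert}

def mock_llm(history: list[str]) -> str:
    # Pretend the model first decides to call a tool, then answers.
    if not any("OBSERVATION" in h for h in history):
        return json.dumps({"action": "lookup_alert", "input": "A-42"})
    return json.dumps({"action": "final", "input": "Gateway 5xx spike; suggest rollback."})

history = ["USER: triage alert A-42"]
while True:
    step = json.loads(mock_llm(history))
    if step["action"] == "final":
        print("ANSWER:", step["input"])
        break
    result = TOOLS[step["action"]](step["input"])
    history.append(f"OBSERVATION: {result}")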

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

India

Remote

Ekyam.ai: Integrating Systems, Unleashing Intelligence - Join Our India Expansion! Are you ready to solve complex integration challenges and build the next generation of AI-driven retail technology? Ekyam.ai, headquartered in New York, US, is expanding globally and establishing its new team in India! We are looking for talented individuals like you to be foundational members of our Indian presence. Ekyam.ai is developing a groundbreaking AI-native middleware platform that connects disparate retail systems (ERP, OMS, WMS, POS, etc.) and creates a unified, real-time, vectorized data layer. We enable intelligent automation and transform how retailers leverage their data by integrating cutting-edge AI capabilities. Role We are seeking an experienced AI Developer (4–6 years) skilled in applying Large Language Models (LLMs) and building AI-driven applications to join our growing team. A significant part of this role involves designing and developing AI Agents within our platform, with an initial focus on integrating external LLM APIs (e.g., OpenAI, Anthropic, Google) via sophisticated prompt engineering and RAG techniques into these agents, built using Python + FastAPI. You will architect the logic for these agents, enabling them to perform complex tasks within our e-commerce and retail data orchestration pipelines. Furthermore, as Ekyam.ai evolves, this role offers the potential to grow into customizing and deploying LLMs in-house, so adaptability and a strong foundation in ML/LLM principles are key. Key Responsibilities AI Agent Development: Design, develop, test, and maintain the core logic for AI Agents within FastAPI services. Orchestrate agent tasks, manage state, interact with platform data/workflows, and integrate LLM capabilities. LLM API Integration & Prompt Engineering: Integrate with external LLM provider APIs. Design, implement, and rigorously test effective prompts for diverse retail-specific tasks (generation, Q&A, summarization). RAG Implementation: Implement and optimize Retrieval-Augmented Generation (RAG) patterns using vector databases to provide relevant context to LLM API calls made by agents. FastAPI Microservice Development: Build and maintain the scalable FastAPI microservices that host AI Agent logic and handle interactions with LLMs and other platform components in a containerized environment (Docker, Kubernetes). Data Processing for AI: Prepare and preprocess data required for effective prompt context, RAG retrieval, and potentially for future fine-tuning tasks. Collaboration & Future Adaptation: Work with cross-functional teams to deliver AI features. Stay updated on LLM advancements and be prepared to learn and contribute to potential future in-house LLM fine-tuning and deployment efforts. Required Skills & Qualifications 3–6 years of hands-on experience in software development with a strong focus on AI/ML application development. Demonstrable experience integrating and utilizing external LLM APIs (e.g., OpenAI, Anthropic, Google) in applications. Proven experience with prompt engineering techniques. Strong Python programming skills. Practical experience building and deploying RESTful APIs using FastAPI. Experience designing and implementing application logic for AI-driven features or agents. Understanding and practical experience with RAG concepts and vector databases (Pinecone, FAISS, etc.).
Solid understanding of core Machine Learning concepts and familiarity with frameworks like PyTorch, TensorFlow, or Hugging Face (important for understanding models and future adaptation). Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes) for application deployment. Solid problem-solving skills and clear communication abilities. Experience working effectively in an agile environment. Willingness and capacity to learn and adapt towards future work involving deeper LLM customization and deployment. Bachelor's or Master's degree in Computer Science, AI, or a related field. Ability to work independently and collaborate effectively in a remote setting. Preferred Qualifications Experience with frameworks like LangChain or LlamaIndex. Experience with observability and debugging tools for LLM applications, such as LangSmith. Experience with graph databases (e.g., Neo4j) and query languages (e.g., Cypher). Experience with MLOps practices, applicable to both current application monitoring and future model lifecycle management. Experience optimizing API call performance (latency/cost) or model inference. Knowledge of AI security considerations and bias mitigation. Why Join Ekyam.ai? Be a foundational member of our new India team! This role offers a unique blend: build intelligent AI Agents leveraging cutting-edge external LLMs today, while positioning yourself at the forefront of our future plans for deeper AI customization. You'll gain expertise across the AI application stack (APIs, RAG, Agents, potential future MLOps) and collaborate within a vibrant global team shaping the future of AI in e-commerce. We offer competitive compensation that values your current skills and growth potential.
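The core pattern this role describes — wrapping an external LLM API call in a FastAPI service — can be sketched in a few lines. This is an illustrative sketch, not Ekyam.ai's code: it assumes the openai>=1.x Python client and an OPENAI_API_KEY in the environment, and the model name, route, and system prompt are placeholders.

# Sketch: a FastAPI endpoint that forwards a question (plus optional RAG
# context) to an external LLM API. All names here are placeholders.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class AskRequest(BaseModel):
    question: str
    context: str = ""  # e.g., retrieved RAG passages

@app.post("/agent/ask")
def ask(req: AskRequest) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{req.context}\n\nQuestion: {req.question}"},
        ],
    )
    return {"answer": resp.choices[0].message.content}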

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Gurugram

Work from Office

We are looking for a passionate and skilled Fullstack Developer to join our growing team. You'll work on building intuitive, responsive web applications and scalable backend services using modern frameworks and cloud technologies. Responsibilities: Front End: - Design and develop responsive UIs using React.js, HTML, CSS, and JavaScript - Create wireframes and mockups using tools like Figma or Canva - Implement dynamic components and visualizations using Highcharts, Material UI, and Tailwind CSS - Ensure seamless REST API integration Middleware: - Develop and maintain middleware logic using FastAPI (or similar frameworks) - Work with Python for API logic and data processing - Containerize and manage services using Docker Back End: - Build and orchestrate data pipelines using Apache Airflow, Databricks, and PySpark - Write and optimize SQL queries for data analysis and reporting - Implement basic authentication using JWT or OAuth standards Requirements: - 3+ years of experience in fullstack or frontend/backend development - Strong hands-on experience with React.js, JavaScript, CSS, and HTML - Experience with Python, FastAPI, and Docker - Familiarity with cloud data tools like Google BigQuery - Exposure to authentication protocols (JWT/OAuth) Preferred: - Working knowledge of Node.js - Ability to collaborate in agile and cross-functional teams
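For the JWT authentication requirement above, here is a minimal sketch using FastAPI with the PyJWT library; the secret, token expiry, and claims are illustrative assumptions, and real code would load the secret from configuration.

# Minimal JWT bearer-auth sketch with FastAPI + PyJWT (illustrative only).
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer

SECRET = "change-me"  # placeholder; load from config/secrets in real code
app = FastAPI()
oauth2 = OAuth2PasswordBearer(tokenUrl="token")

def make_token(user: str) -> str:
    claims = {"sub": user, "exp": datetime.now(timezone.utc) + timedelta(hours=1)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def current_user(token: str = Depends(oauth2)) -> str:
    try:
        return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]
    except jwt.PyJWTError:  # covers bad signature, malformed, and expired tokens
        raise HTTPException(status_code=401, detail="invalid or expired token")

@app.get("/me")
def me(user: str = Depends(current_user)) -> dict:
    return {"user": user}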

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies