Uplevyl

19 Job openings at Uplevyl
Senior AI Engineer – Full Stack | AWS Architect | Pinecone Agentic AI Specialist Delhi, India 6 years None Not disclosed On-site Full Time

Senior AI Engineer – Full Stack | AWS Architect | Pinecone Agentic AI Specialist

Job Summary:
We are seeking a highly experienced Senior Full Stack Engineer with a multidisciplinary background to join our team. The ideal candidate will serve in multiple capacities—as an AWS Solutions Architect, Backend Developer (Node.js), Database Engineer, and Data Analyst/Engineer—and will bring deep expertise in agentic AI workflows and Pinecone vector databases. You’ll be responsible for designing and building secure, scalable systems, driving data-driven decisions, and integrating cutting-edge AI agentic workflows using Pinecone for personalized and contextual automation. A hands-on approach to architecture, performance tuning, data modeling, and team collaboration in a Scrum environment is essential.

Key Responsibilities:

Architecture & Engineering
• Architect and develop scalable solutions using AWS cloud services (Lambda, API Gateway, Cognito, RDS, S3, etc.)
• Lead end-to-end backend development using Node.js (REST APIs, microservices)
• Build and optimize relational and NoSQL databases (PostgreSQL, DynamoDB, etc.)
• Integrate Pinecone vector search with agentic workflows for semantic search, AI decisioning, and personalization

Agentic AI & Pinecone
• Implement and optimize agent-based workflows using orchestration tools (e.g., LangGraph, LangChain, custom DAGs)
• Leverage Pinecone for retrieval-augmented generation (RAG), embedding search, and LLM context enrichment
• Collaborate with AI engineers to support GenAI-driven product features

Data Engineering & Analytics
• Build and maintain data pipelines for structured/unstructured data (ETL/ELT)
• Perform data analysis, cleansing, and transformation to support business insights
• Optimize data storage, indexing, and access patterns for performance

Scrum & Team Collaboration
• Participate in daily stand-ups, sprint planning, and retrospectives
• Work collaboratively with frontend, AI, and product teams to deliver features on time
• Contribute to technical documentation, testing, and code reviews

Programming Languages & Technologies:
• Backend: Node.js (JavaScript/TypeScript), Python (for scripting and AI integration)
• Frontend (optional but preferred): React, Next.js
• Cloud & DevOps: AWS (Lambda, Cognito, API Gateway, RDS, DynamoDB, S3, IAM, VPC), Docker, Git, CI/CD tools
• Databases: PostgreSQL, DynamoDB, Redis
• AI & Agentic Tools: Pinecone, LangChain, LangGraph, OpenAI APIs
• Data Engineering: SQL, Python (Pandas, NumPy), ETL tools
• Other Tools: REST APIs, GraphQL (optional), Terraform or CloudFormation (nice to have)

Required Qualifications:
• 6+ years of experience as a full stack/backend engineer with a focus on Node.js
• 3+ years of experience as an AWS architect, deploying production systems
• Proven experience with RDBMS/NoSQL databases and query optimization
• Deep understanding of Pinecone, vector databases, and AI agent orchestration
• Prior experience as a Data Engineer, including pipeline design, analytics, and modeling
• Solid grasp of agentic architectures, RAG, LangChain/LangGraph, or equivalent frameworks
• Strong understanding of Scrum/Agile methodologies
• Excellent communication skills and ability to lead cross-functional discussions

Preferred Skills:
• Experience with embedding models, HuggingFace, or LLM fine-tuning
• Familiarity with frontend systems (React/Next.js) to support end-to-end development
• Knowledge of data privacy and compliance (GDPR, HIPAA, SOC 2)
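For candidates less familiar with the vector-search duties named above, here is a rough, hedged sketch of the core operation a vector database like Pinecone provides (upsert embeddings, then query by similarity). This is a toy in-memory stand-in, not the Pinecone client API: the hash-based `embed` function substitutes for a real embedding model, and all document texts are invented for illustration.

```python
import numpy as np

# Toy stand-in for an embedding model: a real system would call an embedding
# API; here we hash words into a fixed-size bag-of-words vector and normalize.
def embed(text: str, dim: int = 64) -> np.ndarray:
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Minimal in-memory index mimicking a vector database's upsert/query surface.
class MiniVectorIndex:
    def __init__(self):
        self.ids, self.vectors = [], []

    def upsert(self, doc_id: str, text: str) -> None:
        self.ids.append(doc_id)
        self.vectors.append(embed(text))

    def query(self, text: str, top_k: int = 3) -> list[tuple[str, float]]:
        # Dot product of unit vectors == cosine similarity.
        scores = np.array(self.vectors) @ embed(text)
        order = np.argsort(scores)[::-1][:top_k]
        return [(self.ids[i], float(scores[i])) for i in order]

index = MiniVectorIndex()
index.upsert("doc-1", "mentorship programs for women in leadership")
index.upsert("doc-2", "quarterly revenue report for e-commerce")
matches = index.query("leadership mentorship for women", top_k=1)
```

In a production RAG pipeline the query's top matches would be passed to an LLM as retrieval context rather than returned directly.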

Data Architect - AI-Powered Gender Intelligence System New Delhi, Delhi, India 5 years None Not disclosed On-site Full Time

About the Role:
Architect the schema that will revolutionize gender data. We're building the "source of truth" for gender intelligence—a platform that will do for gender insights what Bloomberg did for financial data. We need a visionary data architect to design the foundation.

The Mission:
Create the definitive data architecture for structuring the world's gender-related information, enabling instant, contextual insights that currently take weeks of manual research.

What You'll Design:
• A revolutionary gender-focused tagging schema and ontology that captures nuance across cultures, economies, and demographics
• Multi-dimensional data models linking disparate sources into coherent intelligence
• Privacy-first architecture meeting enterprise standards (think Harvey AI-level security)
• Scalable knowledge graphs connecting everything from consumer behavior to policy outcomes
• Real-time data versioning and lineage tracking systems

You're Ideal If You Have:
• 5+ years designing complex data architectures for AI/ML systems
• Deep understanding of ontology design, knowledge graphs, and semantic modeling
• Experience with both structured and unstructured data at scale
• A track record of building GDPR/SOC 2-compliant data systems
• Passion for creating order from chaos

The Impact:
Your architecture will become the backbone for decisions affecting women's participation in the $100 trillion global economy. From Nike designing products for women athletes in Vietnam to the World Bank allocating development funds, your schema will power it all.

Join us at the intersection of AI, data, and social impact. This is a rare chance to define how the world structures and accesses gender intelligence for the next decade.
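To make the tagging-schema idea concrete, the sketch below shows one shape a multi-dimensional, lineage-aware record in such a system might take. Every field name and value here is a hypothetical illustration, not the actual Uplevyl schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record type: one observation tagged along several ontology
# dimensions, with source provenance and an ingestion timestamp for lineage.
@dataclass(frozen=True)
class GenderDataPoint:
    indicator: str            # e.g. "female_labor_force_participation"
    value: float              # illustrative value, not a real statistic
    country: str              # ISO 3166-1 alpha-3 code
    year: int
    tags: tuple[str, ...]     # ontology tags, e.g. ("economy", "employment")
    source: str               # provenance, e.g. a named public dataset
    ingested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

point = GenderDataPoint(
    indicator="female_labor_force_participation",
    value=0.0,                # placeholder, not real data
    country="IND",
    year=2023,
    tags=("economy", "employment"),
    source="example public dataset",
)
```

Freezing the dataclass keeps records immutable, which simplifies versioning and lineage tracking: a correction becomes a new record rather than an in-place edit.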

Data Annotator - Gender Intelligence Specialist Noida, Uttar Pradesh, India 0 years None Not disclosed On-site Full Time

About the Role:
Help train the AI that will transform gender equality efforts worldwide. We're building an AI system that understands gender dynamics better than any technology ever created. We need meticulous, thoughtful annotators to teach it.

The Mission:
You'll be the human intelligence behind our AI, ensuring it accurately captures the nuances of gender across cultures, economies, and contexts. Your work directly shapes how organizations worldwide will understand and serve women.

What You'll Do:
• Tag and categorize gender-specific data from global sources using our proprietary schema
• Identify patterns and connections across datasets that others might miss
• Validate AI-suggested labels and improve tagging accuracy
• Document edge cases and cultural nuances in gender data interpretation
• Collaborate with domain experts to refine our understanding of gender metrics

Perfect Candidates Have:
• Exceptional attention to detail and pattern recognition skills
• Interest in gender studies, economics, international development, or sociology
• Experience with data labeling tools (Labelbox, Prodigy) is a plus
• Ability to work with data in multiple languages (highly valued)
• Genuine passion for advancing gender equality through better data

Why Your Work Matters:
Every dataset you annotate helps our AI provide better insights—whether it's helping a financial institution design products for women entrepreneurs or enabling a government to track gender equality progress. You're not just labeling data; you're teaching AI to see what human researchers often miss. This role offers the unique opportunity to combine meaningful mission-driven work with cutting-edge AI development. Competitive hourly rates, flexible scheduling, and the satisfaction of contributing to a platform that will impact millions.

Data Engineer - Gender Intelligence Platform Noida, Uttar Pradesh, India 3 years None Not disclosed On-site Full Time

About the Role:
Join us in building the future of data-driven gender equality. We're assembling a world-class team to create a groundbreaking AI platform that will transform how organizations understand and serve half the world's population. Think Harvey AI, but for one of the most important and overlooked domains in business intelligence.

The Mission:
You'll be instrumental in building infrastructure that turns fragmented public data into actionable gender intelligence, helping Fortune 500 companies, governments, and NGOs make better decisions that impact billions of women worldwide.

What You'll Do:
• Design and implement scalable data pipelines ingesting from 100+ global sources (World Bank, UN, OECD, national statistics offices)
• Build real-time data refresh systems that keep our intelligence current across 50+ countries
• Create robust ETL/ELT workflows handling diverse formats (APIs, CSVs, PDFs, unstructured text)
• Implement enterprise-grade data isolation and multi-tenancy architecture
• Optimize vector databases for lightning-fast retrieval of gender-segmented insights

You're Perfect If You Have:
• 3+ years building production data pipelines at scale
• Experience with the modern data stack (Airflow/Dagster, dbt, Snowflake/BigQuery)
• Strong Python/SQL skills and cloud expertise (AWS/GCP/Azure)
• Passion for using technology to drive social impact
• Curiosity about turning "messy" public data into gold

Why This Matters:
Every day, critical decisions affecting women's economic participation, health, and opportunities are made with incomplete data. Your work will power instant, accurate insights that could redirect billions in investment, shape better products, and create more equitable policies.
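The "messy public data" theme above can be illustrated with a tiny transform step from an ETL workflow: normalizing heterogeneous source rows into one schema and dropping unusable observations. The column names, aliases, and numbers below are invented for the sketch; a real pipeline would be driven by per-source configuration.

```python
import csv
import io

# Toy raw extract: a provider-specific CSV layout with a sentinel for
# missing values. All data here is illustrative, not a real statistic.
RAW = """Country Name,Indicator,2023
India,Female labour force participation (%),"32.8"
Kenya,Female labour force participation (%),N/A
"""

def extract_transform(raw_csv: str) -> list[dict]:
    """Parse one raw extract into normalized observation records."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        value = rec["2023"].strip()
        if value.upper() in {"", "N/A"}:  # drop unusable observations
            continue
        rows.append({
            "country": rec["Country Name"].strip(),
            "indicator": rec["Indicator"].strip().lower(),
            "year": 2023,
            "value": float(value.replace(",", "")),
        })
    return rows

clean = extract_transform(RAW)
```

In an orchestrated pipeline (Airflow, Dagster) this function would be one task; the load step into a warehouse such as Snowflake or BigQuery would follow it.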

Data Architect - AI-Powered Gender Intelligence System Noida, Uttar Pradesh, India 5 years None Not disclosed On-site Full Time

About the Role:
Architect the schema that will revolutionize gender data. We're building the "source of truth" for gender intelligence—a platform that will do for gender insights what Bloomberg did for financial data. We need a visionary data architect to design the foundation.

The Mission:
Create the definitive data architecture for structuring the world's gender-related information, enabling instant, contextual insights that currently take weeks of manual research.

What You'll Design:
• A revolutionary gender-focused tagging schema and ontology that captures nuance across cultures, economies, and demographics
• Multi-dimensional data models linking disparate sources into coherent intelligence
• Privacy-first architecture meeting enterprise standards (think Harvey AI-level security)
• Scalable knowledge graphs connecting everything from consumer behavior to policy outcomes
• Real-time data versioning and lineage tracking systems

You're Ideal If You Have:
• 5+ years designing complex data architectures for AI/ML systems
• Deep understanding of ontology design, knowledge graphs, and semantic modeling
• Experience with both structured and unstructured data at scale
• A track record of building GDPR/SOC 2-compliant data systems
• Passion for creating order from chaos

The Impact:
Your architecture will become the backbone for decisions affecting women's participation in the $100 trillion global economy. From Nike designing products for women athletes in Vietnam to the World Bank allocating development funds, your schema will power it all.

Join us at the intersection of AI, data, and social impact. This is a rare chance to define how the world structures and accesses gender intelligence for the next decade.


Technical Communication Specialist Noida, Uttar Pradesh, India 0 years None Not disclosed Remote Full Time

Company: Uplevyl
Location: Remote
Employment Type: Full-time / Part-time

About Uplevyl:
At Uplevyl, we are on a mission to transform the way women navigate their careers, lives, and personal growth. Our AI-powered platform combines cutting-edge technology with deep insights into gender intelligence to empower women with resources, mentorship, and opportunities for advancement. By integrating research, analytics, and community-driven content, Uplevyl strives to build a future where women lead with confidence and impact.

Role Overview:
We are seeking a Technical Ghostwriter / Technical Communication Specialist with deep expertise in AI and technology. This role will shape Uplevyl's thought leadership by crafting compelling, insightful, and authentic content that resonates with business, professional, and community audiences. You will work directly with our founder and leadership team, adapting their voice and perspective into high-quality written pieces that highlight Uplevyl's mission, expertise, and vision.

Key Responsibilities:

Technical Writing & Ghostwriting
• Translate complex technical concepts (AI bias, vectorization, emerging AI trends) into clear, business-accessible language
• Ghostwrite thought leadership pieces (LinkedIn posts, blogs, reports, articles) in the authentic voice of Uplevyl's leadership
• Develop long-form reports and whitepapers (30–50 pages) and repurpose content into shorter formats

Content Strategy & Development
• Draft and publish engaging content across multiple platforms: blog posts, LinkedIn, newsletters, presentations, and reports
• Design content reuse strategies to maximize reach and consistency across channels
• Collaborate with PR, social, and design teams to ensure content aligns with brand positioning

Research & Insights
• Conduct research on AI and technology trends, especially those intersecting with gender diversity, leadership, and professional growth
• Monitor industry developments through news, podcasts, communities, and company blogs
• Synthesize insights into accessible, credible, and compelling narratives

Collaboration & Stakeholder Engagement
• Work closely with Uplevyl's founder and leadership to capture authentic perspectives
• Partner with distributed, global teams to align content with overall brand goals
• Incorporate feedback and adapt content strategies as the role evolves

Qualifications & Skills:

Required
• Proven experience as a technical writer, content designer, or copywriter with a focus on AI/tech
• Ability to explain complex AI and technology topics to non-technical, business audiences
• Strong ghostwriting ability to adapt to executive voice and tone
• Experience creating both long-form (reports, whitepapers) and short-form content (blogs, social posts)
• Skilled in independent research, trend analysis, and synthesizing insights into accessible content
• Familiarity with content management systems (WordPress) and collaboration tools (Confluence, Jira, AEM, Figma)
• Strong time management, flexibility, and ability to meet deadlines
• Comfort working with remote, globally distributed teams

Preferred / Nice to Have
• B.Tech or equivalent technical degree
• Experience across multiple industries and audiences
• Background in gender diversity, leadership, or professional communities
• Willingness to work flexible hours to accommodate international collaboration

What We Offer:
• Opportunity to shape Uplevyl's voice in the AI and gender intelligence space
• A mission-driven environment committed to women's empowerment and leadership
• Flexible working arrangements (remote, part-time, full-time, or contract)
• Chance to collaborate directly with visionary leaders and make measurable impact
• Potential to convert the contract role into a long-term position

Data Architect - AI-Powered Gender Intelligence System Noida, Uttar Pradesh, India 5 years None Not disclosed On-site Full Time

Architect the schema that will revolutionize gender data. We're building the "source of truth" for gender intelligence—a platform that will do for gender insights what Bloomberg did for financial data. We need a visionary data architect to design the foundation.

The Mission:
Create the definitive data architecture for structuring the world's gender-related information, enabling instant, contextual insights that currently take weeks of manual research.

What You'll Design:
• A scalable data ontology and tagging schema capable of capturing nuance across cultures, industries, and demographics
• Multi-dimensional data models linking structured, semi-structured, and unstructured sources into a unified intelligence layer
• Privacy-first, enterprise-grade architecture (GDPR/SOC 2 compliant) with robust security and governance
• Real-time data versioning, lineage, and quality monitoring systems
• High-performance architecture optimized for AI/ML workloads and vectorized data retrieval

You're Ideal If You Have:
• 5+ years designing complex data architectures for analytics, AI/ML, or large-scale enterprise systems
• Deep understanding of ontology design, knowledge graphs, and semantic modeling
• Hands-on experience with both structured and unstructured data pipelines at scale
• Familiarity with modern data stacks and emerging AI infrastructure (vector databases, embeddings, LLM integration)
• Proven ability to build secure, privacy-first architectures that meet enterprise compliance standards
• Passion for creating order from chaos

If You're a Senior Data Engineer:
This is also a perfect next step if you are a Senior Data Engineer with relevant experience eager to move into an architect-level role. If you've already built scalable data pipelines, optimized query performance, and implemented robust ETL/ELT frameworks, this role will give you the chance to:
• Move from pipeline building to system-level architecture design
• Shape end-to-end frameworks that power AI-driven intelligence
• Work closely with product and AI teams to design the data foundations for large-scale LLM applications

The Impact:
Your architecture will become the backbone for decisions affecting women's participation in the $100 trillion global economy. From Nike designing products for women athletes in Vietnam to the World Bank allocating development funds, your schema will power it all.

Join us at the intersection of AI, data, and social impact. This is a rare chance to define how the world structures and accesses gender intelligence for the next decade.


Senior AI Engineer – Data Engineering | Pinecone Agentic Workflow Noida, Uttar Pradesh, India 6 years None Not disclosed On-site Full Time

Senior AI Engineer (AWS Architect | Node.js Backend | Data Engineering | Pinecone Agentic Workflow)

Job Summary:
We are seeking a highly experienced Senior Full Stack Engineer with a multidisciplinary background to join our team. The ideal candidate will serve in multiple capacities—as an AWS Solutions Architect, Backend Developer (Node.js), Database Engineer, and Data Analyst/Engineer—and will bring deep expertise in agentic AI workflows and Pinecone vector databases. You’ll be responsible for designing and building secure, scalable systems, driving data-driven decisions, and integrating cutting-edge AI agentic workflows using Pinecone for personalized and contextual automation. A hands-on approach to architecture, performance tuning, data modeling, and team collaboration in a Scrum environment is essential.

Key Responsibilities:

Architecture & Engineering
• Architect and develop scalable solutions using AWS cloud services (Lambda, API Gateway, Cognito, RDS, S3, etc.)
• Lead end-to-end backend development using Node.js (REST APIs, microservices)
• Build and optimize relational and NoSQL databases (PostgreSQL, DynamoDB, etc.)
• Integrate Pinecone vector search with agentic workflows for semantic search, AI decisioning, and personalization

Agentic AI & Pinecone
• Implement and optimize agent-based workflows using orchestration tools (e.g., LangGraph, LangChain, custom DAGs)
• Leverage Pinecone for retrieval-augmented generation (RAG), embedding search, and LLM context enrichment
• Collaborate with AI engineers to support GenAI-driven product features

Data Engineering & Analytics
• Build and maintain data pipelines for structured/unstructured data (ETL/ELT)
• Perform data analysis, cleansing, and transformation to support business insights
• Optimize data storage, indexing, and access patterns for performance

Scrum & Team Collaboration
• Participate in daily stand-ups, sprint planning, and retrospectives
• Work collaboratively with frontend, AI, and product teams to deliver features on time
• Contribute to technical documentation, testing, and code reviews

Programming Languages & Technologies:
• Backend: Node.js (JavaScript/TypeScript), Python (for scripting and AI integration)
• Frontend (optional but preferred): React, Next.js
• Cloud & DevOps: AWS (Lambda, Cognito, API Gateway, RDS, DynamoDB, S3, IAM, VPC), Docker, Git, CI/CD tools
• Databases: PostgreSQL, DynamoDB, Redis
• AI & Agentic Tools: Pinecone, LangChain, LangGraph, OpenAI APIs
• Data Engineering: SQL, Python (Pandas, NumPy), ETL tools
• Other Tools: REST APIs, GraphQL (optional), Terraform or CloudFormation (nice to have)

Required Qualifications:
• 6+ years of experience as a full stack/backend engineer with a focus on Node.js
• 3+ years of experience as an AWS architect, deploying production systems
• Proven experience with RDBMS/NoSQL databases and query optimization
• Deep understanding of Pinecone, vector databases, and AI agent orchestration
• Prior experience as a Data Engineer, including pipeline design, analytics, and modeling
• Solid grasp of agentic architectures, RAG, LangChain/LangGraph, or equivalent frameworks
• Strong understanding of Scrum/Agile methodologies
• Excellent communication skills and ability to lead cross-functional discussions

Preferred Skills:
• Experience with embedding models, HuggingFace, or LLM fine-tuning
• Familiarity with frontend systems (React/Next.js) to support end-to-end development
• Knowledge of data privacy and compliance (GDPR, HIPAA, SOC 2)

Senior AI Engineer (RAG | Pinecone | GenAI Workflows) Noida, Uttar Pradesh, India 0 years None Not disclosed On-site Full Time

About the Role:
Uplevyl is seeking a Senior AI Engineer to lead the design and deployment of AI-powered, agentic workflows that shape the future of personalized insights. You will focus on vector search, retrieval-augmented generation (RAG), and intelligent automation, working closely with full-stack engineers and product teams to bring scalable GenAI features into production. This role is ideal for someone passionate about applied AI and engineering, with the ability to build, optimize, and deploy AI systems that go beyond prototypes into real-world, production-grade systems.

Key Responsibilities:

AI & Agentic Workflows
• Design and implement RAG pipelines for semantic search, personalization, and contextual enrichment
• Build agentic AI workflows using Pinecone, LangChain/LangGraph, and custom orchestration
• Integrate LLM-driven features into production systems, balancing innovation with scalability

Vector Search & Data Intelligence
• Architect and optimize vector databases (Pinecone, FAISS, Milvus) for low-latency retrieval
• Work with structured/unstructured datasets for embedding, indexing, and enrichment
• Collaborate with data engineers on ETL/ELT pipelines to prepare data for AI applications

Collaboration & Agile Delivery
• Partner with backend and frontend engineers to integrate AI features into user-facing products
• Participate in Agile ceremonies (sprint planning, reviews, standups)
• Maintain clear documentation and support knowledge sharing across the AI team

Tech Stack:
• AI Tools: Pinecone, LangChain, LangGraph, OpenAI APIs (ChatGPT, GPT-4/5), HuggingFace models
• Languages: Python (primary for AI workflows), basic Node.js for integration
• Cloud & DevOps: AWS (Lambda, S3, RDS, DynamoDB, IAM), Docker, CI/CD pipelines
• Data Engineering: SQL, Python (Pandas, NumPy), ETL/ELT workflows; databases (Postgres, DynamoDB, Redis)
• Bonus Exposure: React, Next.js

Required Qualifications:
• 5+ years in AI/ML engineering or software engineering with an applied AI focus
• Hands-on experience with RAG pipelines, vector databases (Pinecone, FAISS, Milvus), and LLM integration
• Strong background in Python for AI workflows (embeddings, orchestration, optimization)
• Familiarity with agentic architectures (LangChain, LangGraph, or similar)
• Experience deploying and scaling AI features on AWS cloud environments
• Strong collaboration and communication skills for cross-functional teamwork

Preferred Skills:
• Experience with embedding models, HuggingFace Transformers, or fine-tuning LLMs
• Knowledge of compliance frameworks (GDPR, HIPAA, SOC 2)
• Exposure to personalization engines, recommender systems, or conversational AI
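As a hedged sketch of the "retrieve then augment" half of a RAG pipeline described in this role: retrieval below is simulated with word-overlap scoring, where a production system would score against a vector index (Pinecone, FAISS, Milvus), and the final step of sending the assembled prompt to an LLM is deliberately omitted. The chunk texts are invented for illustration.

```python
# Toy knowledge base: in production these would be embedded document chunks.
CHUNKS = [
    "Mentorship circles pair members by career stage.",
    "The platform supports enterprise single sign-on.",
    "Mentorship matching uses member goals and industry tags.",
]

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank chunks by shared-word count with the question (stand-in for
    cosine similarity over embeddings)."""
    q = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How does mentorship matching work?", CHUNKS)
```

Constraining the model to answer "using only this context" is the contextual-enrichment step that reduces hallucination relative to asking the LLM directly.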

Senior AI Engineer Noida, Uttar Pradesh 5-9 years INR Not disclosed On-site Full Time

As a Senior AI Engineer at Uplevyl, you will play a crucial role in leading the design and deployment of AI-powered, agentic workflows that drive the future of personalized insights. Your main focus will be on vector search, retrieval-augmented generation (RAG), and intelligent automation, collaborating closely with full-stack engineers and product teams to bring scalable GenAI features into production.

Key Responsibilities:
- Design and implement RAG pipelines for semantic search, personalization, and contextual enrichment.
- Build agentic AI workflows using Pinecone, LangChain/LangGraph, and custom orchestration.
- Integrate LLM-driven features into production systems, balancing innovation with scalability.
- Architect and optimize vector databases (Pinecone, FAISS, Milvus) for low-latency retrieval.
- Work with structured/unstructured datasets for embedding, indexing, and enrichment.
- Collaborate with data engineers on ETL/ELT pipelines to prepare data for AI applications.
- Partner with backend and frontend engineers to integrate AI features into user-facing products.
- Participate in Agile ceremonies (sprint planning, reviews, standups).
- Maintain clear documentation and support knowledge sharing across the AI team.

Required Qualifications:
- 5+ years in AI/ML engineering or software engineering with an applied AI focus.
- Hands-on experience with RAG pipelines, vector databases (Pinecone, FAISS, Milvus), and LLM integration.
- Strong background in Python for AI workflows (embeddings, orchestration, optimization).
- Familiarity with agentic architectures (LangChain, LangGraph, or similar).
- Experience deploying and scaling AI features on AWS cloud environments.
- Strong collaboration and communication skills for cross-functional teamwork.

Tech Stack:
- AI Tools: Pinecone, LangChain, LangGraph, OpenAI APIs (ChatGPT, GPT-4/5), HuggingFace models
- Languages: Python (primary for AI workflows), basic Node.js knowledge for integration
- Cloud & DevOps: AWS (Lambda, S3, RDS, DynamoDB, IAM), Docker, CI/CD pipelines
- Data Engineering: SQL, Python (Pandas, NumPy), ETL/ELT workflows, databases (Postgres, DynamoDB, Redis)
- Bonus Exposure: React, Next.js

Preferred Skills:
- Experience with embedding models, HuggingFace Transformers, or fine-tuning LLMs.
- Knowledge of compliance frameworks (GDPR, HIPAA, SOC 2).
- Exposure to personalization engines, recommender systems, or conversational AI.

Senior Full Stack Backend Developer – AWS Architect + Node.js + DB Engineering Noida, Uttar Pradesh, India 6 years None Not disclosed On-site Full Time

Senior Full Stack Backend Developer (AWS Architect + Node.js + DB Engineer) Experience Required: 6+ Years Domain: AI-Powered Social Platform / Media / E-Commerce About the Role We are seeking a highly skilled Senior Full Stack Backend Developer with proven expertise across AWS architecture, Node.js backend development, and scalable database design . You’ll be responsible for designing, building, and optimizing scalable APIs , managing cloud infrastructure, and driving AI-first backend innovations in a fast-paced, agile environment. This role also involves collaborating closely with frontend/mobile teams and integrating seamlessly with our React Native apps . You will be expected to leverage AI tools for automation, rapid prototyping, and performance tuning — ensuring continuity with our existing architecture and enabling future scalability. Key Responsibilities • Architect, design, and maintain a highly scalable and secure backend system using Node.js and AWS services . • Build RESTful and GraphQL APIs to support mobile and web applications. • Lead infrastructure decisions on AWS (Lambda, ECS, Cognito, S3, API Gateway, DynamoDB, RDS, etc.) . • Design and optimize relational and NoSQL databases, focusing on performance, indexing, and data integrity. • Integrate AI APIs/tools into the backend for moderation, recommendation, and automation workflows. • Integrate with Stripe and PayPal payment gateway integrations • Write clean, modular, well-tested code following modern engineering practices. • Collaborate with frontend/mobile teams to deliver features aligned with product goals. • Participate in Agile/Scrum ceremonies , sprint planning, and code reviews. • Support existing application infrastructure while leading modernization efforts. 
Required Skills and Qualifications

Backend Development
- 5+ years of hands-on development in Node.js, Express/NestJS
- Proficient with TypeScript, REST API standards, WebSockets, and microservices

AWS & DevOps Architecture
- Expert knowledge of AWS services: Cognito, Lambda, S3, API Gateway, ECS/EKS, IAM, CloudWatch
- Infrastructure as Code: CloudFormation, CDK, or Terraform
- Strong understanding of networking, authentication flows, and cloud security best practices

Database Engineering
- Experience with PostgreSQL/MySQL and NoSQL solutions like DynamoDB
- Ability to design normalized schemas, manage migrations, write optimized queries, and handle large-scale data flows
- Familiarity with time-series data and analytics DBs (e.g., Redshift or Athena) is a plus

AI-Driven Development
- Experience integrating with AI APIs (e.g., OpenAI, AWS Comprehend, Rekognition)
- Use of AI tools (e.g., Copilot, ChatGPT, AI code generation) for rapid prototyping and efficiency
- Comfortable debugging or extending AI-assisted modules in production

Testing & CI/CD
- Experience with unit/integration testing using Jest, Mocha, or similar
- Familiarity with GitHub Actions, CodePipeline, or similar CI/CD tools

Security & Compliance
- JWT/OAuth2 token handling and RBAC implementation
- Understanding of data privacy (GDPR), secure data flows, and encryption

Soft Skills
- Strong communication and cross-functional collaboration
- Product-first thinking and an eye for scalable design
- Ability to rapidly adapt and extend an existing backend codebase with minimal disruption
- Comfortable working in Scrum and participating in agile delivery
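The JWT/OAuth2 requirement above boils down to signing claims and verifying them on every request. A minimal HS256 sketch, shown in Python for brevity (the production stack here is Node.js, where a library such as jsonwebtoken would handle this); the secret and claim names are illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload, secret):
    # HS256: HMAC-SHA256 over "<header>.<payload>"
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token, secret):
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return None  # signature mismatch: reject
    claims = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if claims.get("exp", 0) < time.time():
        return None  # expired token: reject
    return claims

token = sign_jwt({"sub": "user-123", "role": "admin", "exp": time.time() + 3600}, "dev-secret")
claims = verify_jwt(token, "dev-secret")
```

RBAC then becomes a check on `claims["role"]` at each protected route; the key design point is that verification needs only the shared secret, not a database lookup.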

Senior AI Engineer (RAG | Pinecone | GenAI Workflows) Noida, Uttar Pradesh, India 0 years None Not disclosed On-site Full Time

About the Role
Uplevyl is seeking a Senior AI Engineer to lead the design and deployment of AI-powered, agentic workflows that shape the future of personalized insights. You will focus on vector search, retrieval-augmented generation (RAG), and intelligent automation, working closely with full-stack engineers and product teams to bring scalable GenAI features into production. This role is ideal for someone passionate about applied AI and engineering, with the ability to build, optimize, and deploy AI systems that go beyond prototypes into real-world, production-grade systems.

Key Responsibilities
AI & Agentic Workflows
- Design and implement RAG pipelines for semantic search, personalization, and contextual enrichment.
- Build agentic AI workflows using Pinecone, LangChain/LangGraph, and custom orchestration.
- Integrate LLM-driven features into production systems, balancing innovation with scalability.
Vector Search & Data Intelligence
- Architect and optimize vector databases (Pinecone, FAISS, Milvus) for low-latency retrieval.
- Work with structured/unstructured datasets for embedding, indexing, and enrichment.
- Collaborate with data engineers on ETL/ELT pipelines to prepare data for AI applications.
Collaboration & Agile Delivery
- Partner with backend and frontend engineers to integrate AI features into user-facing products.
- Participate in Agile ceremonies (sprint planning, reviews, standups).
- Maintain clear documentation and support knowledge sharing across the AI team.

Tech Stack
- AI Tools: Pinecone, LangChain, LangGraph, OpenAI APIs (ChatGPT, GPT-4/5), HuggingFace models
- Languages: Python (primary for AI workflows), basic Node.js knowledge for integration
- Cloud & DevOps: AWS (Lambda, S3, RDS, DynamoDB, IAM), Docker, CI/CD pipelines
- Data Engineering: SQL, Python (Pandas, NumPy), ETL/ELT workflows; databases: Postgres, DynamoDB, Redis
- Bonus Exposure: React, Next.js

Required Qualifications
- 5+ years in AI/ML engineering or software engineering with an applied AI focus.
- Hands-on experience with RAG pipelines, vector databases (Pinecone, FAISS, Milvus), and LLM integration.
- Strong background in Python for AI workflows (embeddings, orchestration, optimization).
- Familiarity with agentic architectures (LangChain, LangGraph, or similar).
- Experience deploying and scaling AI features on AWS cloud environments.
- Strong collaboration and communication skills for cross-functional teamwork.

Preferred Skills
- Experience with embedding models, HuggingFace Transformers, or fine-tuning LLMs.
- Knowledge of compliance frameworks (GDPR, HIPAA, SOC 2).
- Exposure to personalization engines, recommender systems, or conversational AI.
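The RAG responsibilities above follow one core pattern: embed the query, retrieve the nearest documents by vector similarity, then feed the hits to an LLM as context. A minimal, self-contained sketch of the retrieve-and-augment steps, with hand-made toy vectors standing in for real embeddings and an in-memory dictionary standing in for a Pinecone index:

```python
import math

# Toy "embeddings": in production these would come from an embedding
# model and live in a vector index such as Pinecone; here they are
# hand-made 3-d vectors so the retrieval step can run standalone.
DOCS = {
    "Pinecone stores dense vectors for low-latency similarity search.": [0.9, 0.1, 0.0],
    "LangGraph coordinates multi-step agent workflows.":               [0.1, 0.9, 0.0],
    "ETL pipelines prepare structured data for analytics.":            [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=2):
    # Rank documents by similarity to the query embedding, keep top-k
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_vec), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    # "Augmentation" step: retrieved passages become the LLM's context
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is Pinecone for?", [1.0, 0.2, 0.0])
```

The final step, sending `prompt` to a chat-completion API, is omitted so the sketch runs offline; swapping the dictionary for a Pinecone `query` call changes the retrieval line, not the shape of the pipeline.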

Senior AI Engineer (RAG | Pinecone | GenAI Workflows) Noida, Uttar Pradesh, India 5 - 7 years INR Not disclosed On-site Full Time

About the Role
Uplevyl is seeking a Senior AI Engineer to lead the design and deployment of AI-powered, agentic workflows that shape the future of personalized insights. You will focus on vector search, retrieval-augmented generation (RAG), and intelligent automation, working closely with full-stack engineers and product teams to bring scalable GenAI features into production. This role is ideal for someone passionate about applied AI and engineering, with the ability to build, optimize, and deploy AI systems that go beyond prototypes into real-world, production-grade systems.

Key Responsibilities
AI & Agentic Workflows
- Design and implement RAG pipelines for semantic search, personalization, and contextual enrichment.
- Build agentic AI workflows using Pinecone, LangChain/LangGraph, and custom orchestration.
- Integrate LLM-driven features into production systems, balancing innovation with scalability.
Vector Search & Data Intelligence
- Architect and optimize vector databases (Pinecone, FAISS, Milvus) for low-latency retrieval.
- Work with structured/unstructured datasets for embedding, indexing, and enrichment.
- Collaborate with data engineers on ETL/ELT pipelines to prepare data for AI applications.
Collaboration & Agile Delivery
- Partner with backend and frontend engineers to integrate AI features into user-facing products.
- Participate in Agile ceremonies (sprint planning, reviews, standups).
- Maintain clear documentation and support knowledge sharing across the AI team.

Tech Stack
- AI Tools: Pinecone, LangChain, LangGraph, OpenAI APIs (ChatGPT, GPT-4/5), HuggingFace models
- Languages: Python (primary for AI workflows), basic Node.js knowledge for integration
- Cloud & DevOps: AWS (Lambda, S3, RDS, DynamoDB, IAM), Docker, CI/CD pipelines
- Data Engineering: SQL, Python (Pandas, NumPy), ETL/ELT workflows; databases: Postgres, DynamoDB, Redis
- Bonus Exposure: React, Next.js

Required Qualifications
- 5+ years in AI/ML engineering or software engineering with an applied AI focus.
- Hands-on experience with RAG pipelines, vector databases (Pinecone, FAISS, Milvus), and LLM integration.
- Strong background in Python for AI workflows (embeddings, orchestration, optimization).
- Familiarity with agentic architectures (LangChain, LangGraph, or similar).
- Experience deploying and scaling AI features on AWS cloud environments.
- Strong collaboration and communication skills for cross-functional teamwork.

Preferred Skills
- Experience with embedding models, HuggingFace Transformers, or fine-tuning LLMs.
- Knowledge of compliance frameworks (GDPR, HIPAA, SOC 2).
- Exposure to personalization engines, recommender systems, or conversational AI.
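Before any of the embedding work above reaches a vector index, documents are usually split into overlapping chunks so each vector covers a coherent span of text. A toy sliding-window chunker, assuming word counts stand in for tokens; the sizes are illustrative, not a recommendation:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping word-window chunks for embedding.

    chunk_size and overlap are in words; real pipelines typically count
    tokens using the embedding model's own tokenizer instead.
    """
    words = text.split()
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + chunk_size]
        chunks.append(" ".join(piece))
        if start + chunk_size >= len(words):
            break  # final window already reached the end of the text
    return chunks

doc = " ".join(f"word{i}" for i in range(500))
chunks = chunk_text(doc, chunk_size=200, overlap=50)
```

The overlap means a sentence falling on a chunk boundary still appears whole in at least one chunk, which tends to improve retrieval recall at the cost of some index size.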

Data Engineer - Gender Intelligence Platform Noida, Uttar Pradesh, India 3 - 5 years INR Not disclosed On-site Full Time

About the Role
Join us in building the future of data-driven gender equality. We're assembling a world-class team to create a groundbreaking AI platform that will transform how organizations understand and serve half the world's population. Think Harvey AI, but for one of the most important and overlooked domains in business intelligence.

The Mission
You'll be instrumental in building infrastructure that turns fragmented public data into actionable gender intelligence, helping Fortune 500 companies, governments, and NGOs make better decisions that impact billions of women worldwide.

What You'll Do:
- Design and implement scalable data pipelines ingesting from 100+ global sources (World Bank, UN, OECD, national statistics offices)
- Build real-time data refresh systems that keep our intelligence current across 50+ countries
- Create robust ETL/ELT workflows handling diverse formats (APIs, CSVs, PDFs, unstructured text)
- Implement enterprise-grade data isolation and multi-tenancy architecture
- Optimize vector databases for lightning-fast retrieval of gender-segmented insights

You're Perfect If You Have:
- 3+ years building production data pipelines at scale
- Experience with the modern data stack (Airflow/Dagster, dbt, Snowflake/BigQuery)
- Strong Python/SQL skills and cloud expertise (AWS/GCP/Azure)
- Passion for using technology to drive social impact
- Curiosity about turning "messy" public data into gold

Why This Matters
Every day, critical decisions affecting women's economic participation, health, and opportunities are made with incomplete data. Your work will power instant, accurate insights that could redirect billions in investment, shape better products, and create more equitable policies.
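Much of the ETL/ELT work above is cleansing: public sources disagree on labels, encode numbers as strings, and leave gaps. A toy transform step, with hypothetical field names and rows invented for illustration, that normalizes country labels, coerces types, and drops unmeasured rows rather than guessing:

```python
# Raw rows as they might arrive from heterogeneous public sources:
# inconsistent country labels, numbers as strings, missing values.
# These rows and field names are made up for the example.
RAW = [
    {"country": " india ", "year": "2023", "labor_participation": "32.7"},
    {"country": "INDIA",   "year": "2022", "labor_participation": ""},
    {"country": "Kenya",   "year": "2023", "labor_participation": "72.1"},
]

def transform(rows):
    """Cleanse and normalize one batch: the 'T' of a toy ETL step."""
    clean = []
    for row in rows:
        value = row["labor_participation"].strip()
        if not value:
            continue  # drop rows with no measurement rather than impute
        clean.append({
            "country": row["country"].strip().title(),  # " india " -> "India"
            "year": int(row["year"]),
            "labor_participation": float(value),
        })
    return clean

cleaned = transform(RAW)
```

At scale the same shape holds: an orchestrator such as Airflow or Dagster schedules the batch, and the transform runs in dbt or Python against the warehouse instead of an in-memory list.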

Technical Project Manager & Scrum Master Noida, Uttar Pradesh, India 4 years None Not disclosed On-site Full Time

Job Title: AI Engineer
Location: Onsite – Noida
Type: Full-time

About Us
At Uplevyl, we're redefining what intelligent communities look like. Through our AI-powered agents, we're building scalable, agentic community systems that reduce manual work and increase member engagement, especially for women-centric organizations. We're looking for someone who gets community, because they've built one. Someone who understands the operational bottlenecks and the daily wins of growing a thriving space. Most importantly, someone who can now take that expertise and help us bring this new generation of agentic communities to life.

Key Responsibilities
- Design, develop, and deploy LLM-based AI solutions tailored for scalable community systems.
- Implement advanced RAG models, embedding pipelines, and vector databases (e.g., Pinecone, FAISS, Qdrant).
- Work with multi-agent orchestration frameworks to build adaptive, intelligent workflows.
- Fine-tune, pre-train, and evaluate LLMs for domain-specific applications.
- Integrate AI/ML systems with AWS services such as SageMaker, Bedrock, ECS, and Cognito (or equivalent).
- Collaborate with product and community teams to understand real-world use cases and translate them into robust AI solutions.
- Ensure ethical AI practices when handling domain-sensitive datasets.
- Stay updated on the latest AI research and apply innovative approaches to improve scalability and efficiency.

Performance Expectations
- Deliver high-quality, production-ready AI models and pipelines within agreed timelines.
- Demonstrate measurable improvements in community engagement and operational efficiency through AI-driven systems.
- Maintain high standards of ethical AI practices and data governance.
- Proactively identify challenges in scaling agentic communities and propose innovative solutions.
- Communicate technical concepts clearly to both technical and non-technical stakeholders.
Opportunities & Challenges
Opportunities:
- Work on cutting-edge AI systems at the intersection of community, engagement, and automation.
- Contribute to building next-generation multi-agent and RAG-based solutions that will set industry benchmarks.
- Be part of a mission-driven company focused on empowering women-centric organizations.
Challenges:
- Balancing innovation with scalability in real-world deployments.
- Navigating the complexities of sensitive datasets while ensuring privacy and ethical AI usage.
- Driving adoption of agentic community systems in diverse organizational settings.

Qualifications
- Bachelor's/Master's in Computer Science, AI/ML, Data Science, or a related field.
- 4+ years of experience in LLM-based solutions, NLP, or AI system design.
- Proven experience with RAG models, vector databases (Pinecone, FAISS, Qdrant), and embedding pipelines.
- Strong expertise in Python, PyTorch/TensorFlow, LangChain, LangGraph, Hugging Face Transformers.
- Familiarity with the AWS AI/ML stack (SageMaker, Bedrock, ECS, Cognito) or equivalent.
- Experience handling domain-sensitive datasets and applying ethical AI practices.
- Hands-on experience with fine-tuning, pre-training, and evaluation of LLMs.
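The multi-agent orchestration responsibility above reduces to a simple loop: route each task to a tool, run it, collect the result. A toy dispatcher that mimics, very loosely, what frameworks like LangGraph do with graphs of LLM-backed nodes; the tool names and the keyword routing rule are invented for illustration:

```python
# Toy "agent" loop: a router picks a tool for each task and collects
# the results. A real system would let an LLM choose the tool and
# loop until a stop condition; here routing is a keyword match so the
# sketch can run standalone.
def search_tool(task):
    return f"search results for: {task}"

def summarize_tool(task):
    return f"summary of: {task}"

TOOLS = {"search": search_tool, "summarize": summarize_tool}

def route(task):
    # Stand-in for an LLM's tool-selection step
    return "summarize" if task.startswith("summarize") else "search"

def run_agent(tasks):
    transcript = []
    for task in tasks:
        tool = route(task)
        transcript.append((tool, TOOLS[tool](task)))  # (tool used, tool output)
    return transcript

log = run_agent(["find member engagement stats", "summarize weekly activity"])
```

Graph-based orchestrators add the parts deliberately left out here: shared state passed between nodes, conditional edges, and retries, but the tool-dispatch core is the same.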

AI Engineer Noida, Uttar Pradesh, India 4 years None Not disclosed On-site Full Time

Job Title: AI Engineer
Location: Onsite – Noida
Type: Full-time

About Us
At Uplevyl, we're redefining what intelligent communities look like. Through our AI-powered agents, we're building scalable, agentic community systems that reduce manual work and increase member engagement, especially for women-centric organizations. We're looking for someone who gets community, because they've built one. Someone who understands the operational bottlenecks and the daily wins of growing a thriving space. Most importantly, someone who can now take that expertise and help us bring this new generation of agentic communities to life.

Key Responsibilities
- Design, develop, and deploy LLM-based AI solutions tailored for scalable community systems.
- Implement advanced RAG models, embedding pipelines, and vector databases (e.g., Pinecone, FAISS, Qdrant).
- Work with multi-agent orchestration frameworks to build adaptive, intelligent workflows.
- Fine-tune, pre-train, and evaluate LLMs for domain-specific applications.
- Integrate AI/ML systems with AWS services such as SageMaker, Bedrock, ECS, and Cognito (or equivalent).
- Collaborate with product and community teams to understand real-world use cases and translate them into robust AI solutions.
- Ensure ethical AI practices when handling domain-sensitive datasets.
- Stay updated on the latest AI research and apply innovative approaches to improve scalability and efficiency.

Performance Expectations
- Deliver high-quality, production-ready AI models and pipelines within agreed timelines.
- Demonstrate measurable improvements in community engagement and operational efficiency through AI-driven systems.
- Maintain high standards of ethical AI practices and data governance.
- Proactively identify challenges in scaling agentic communities and propose innovative solutions.
- Communicate technical concepts clearly to both technical and non-technical stakeholders.
Opportunities & Challenges
Opportunities:
- Work on cutting-edge AI systems at the intersection of community, engagement, and automation.
- Contribute to building next-generation multi-agent and RAG-based solutions that will set industry benchmarks.
- Be part of a mission-driven company focused on empowering women-centric organizations.
Challenges:
- Balancing innovation with scalability in real-world deployments.
- Navigating the complexities of sensitive datasets while ensuring privacy and ethical AI usage.
- Driving adoption of agentic community systems in diverse organizational settings.

Qualifications
- Bachelor's/Master's in Computer Science, AI/ML, Data Science, or a related field.
- 4+ years of experience in LLM-based solutions, NLP, or AI system design.
- Proven experience with RAG models, vector databases (Pinecone, FAISS, Qdrant), and embedding pipelines.
- Strong expertise in Python, PyTorch/TensorFlow, LangChain, LangGraph, Hugging Face Transformers.
- Familiarity with the AWS AI/ML stack (SageMaker, Bedrock, ECS, Cognito) or equivalent.
- Experience handling domain-sensitive datasets and applying ethical AI practices.
- Hands-on experience with fine-tuning, pre-training, and evaluation of LLMs.

Back End Developer Noida, Uttar Pradesh, India 5 years None Not disclosed On-site Full Time

Job Title: Backend Engineer
Location: Onsite – Noida
Type: Full-time

About Us
At Uplevyl, we're redefining what intelligent communities look like. Through our AI-powered agents, we're building scalable, agentic community systems that reduce manual work and increase member engagement, especially for women-centric organizations. We're looking for someone who gets community, because they've built one. Someone who understands the operational bottlenecks and the daily wins of growing a thriving space. Most importantly, someone who can now take that expertise and help us bring this new generation of agentic communities to life.

Key Responsibilities
- Design, build, and maintain robust backend services for large-scale, multi-tenant SaaS applications.
- Develop and optimize GraphQL APIs for seamless integration across products and AI-driven systems.
- Work with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, DynamoDB) databases, ensuring efficient schema design and performance.
- Architect, deploy, and scale microservices in containerized environments (Docker/Kubernetes).
- Implement event-driven architectures using Kafka, Kinesis, or RabbitMQ.
- Leverage AWS services (ECS, Lambda, RDS, S3, Cognito) to ensure high availability, security, and scalability.
- Collaborate with frontend and AI teams to deliver integrated solutions for intelligent community systems.
- Troubleshoot, debug, and optimize backend applications for performance and reliability.

Performance Expectations
- Deliver secure, scalable, and production-ready backend services within project timelines.
- Maintain high system uptime and performance across distributed environments.
- Proactively address architectural challenges in multi-tenant SaaS and event-driven systems.
- Demonstrate strong ownership of backend infrastructure, from design through deployment.
- Communicate effectively with cross-functional teams to ensure smooth product integration.
Opportunities & Challenges
Opportunities:
- Work on backend systems powering next-generation AI-driven community platforms.
- Build multi-tenant architecture that can scale globally.
- Gain exposure to AI integration and advanced API ecosystems.
Challenges:
- Architecting for scale while maintaining high performance and low latency.
- Ensuring fault tolerance and reliability in distributed event-driven systems.
- Navigating complex SaaS deployments across cloud-native environments.

Qualifications
- 5+ years of experience as a Backend Engineer on large-scale applications.
- Strong expertise in Nest.js (Node.js) with hands-on experience building GraphQL APIs.
- Experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, DynamoDB) databases.
- Solid understanding of microservices architecture and containerization (Docker/Kubernetes).
- Hands-on experience with the AWS stack (ECS, Lambda, RDS, S3, Cognito).
- Knowledge of event-driven systems (Kafka, Kinesis, RabbitMQ).
- Familiarity with AI and API integration is a strong plus.
- Experience designing and scaling multi-tenant SaaS architectures.
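The event-driven architecture requirement above hinges on one idea: producers publish to a topic without knowing who consumes it. A minimal in-process broker sketch, in Python for brevity (the stack here is Node.js, and a real deployment would use Kafka, Kinesis, or RabbitMQ); the topic name and event shape are invented for illustration:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process pub/sub broker.

    Illustrates the decoupling behind Kafka/RabbitMQ topologies. A real
    broker adds persistence, partitions, consumer groups, and delivery
    guarantees; this sketch only shows topic fan-out.
    """
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan out: every subscriber of the topic receives the event
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
audit_log = []

# Two independent consumers of the same event stream: neither knows
# about the other, and the producer knows about neither.
broker.subscribe("member.joined", lambda e: audit_log.append(f"audit:{e['id']}"))
broker.subscribe("member.joined", lambda e: audit_log.append(f"welcome:{e['id']}"))

broker.publish("member.joined", {"id": 42})
```

Adding a third consumer (say, an analytics pipeline) requires only another `subscribe` call, which is exactly why event-driven designs scale well for fault-isolated SaaS features.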