Namasys Analytics

Job openings at Namasys Analytics
Data Engineer (AWS | Python | PySpark | SQL)
Location: Delhi, India | Experience: 7 years | Salary: Not disclosed | Remote | Full Time

We’re Hiring: Data Engineer (AWS | Python | PySpark | SQL)

Work Type: 100% Remote
Experience: 5–7 years
Notice Period: Immediate joiners preferred; up to 15 days acceptable

About the Role:
We are seeking a highly skilled Data Engineer to design, develop, and optimize robust data pipelines and cloud-based architectures. You will work with cutting-edge AWS services, Python, PySpark, SQL, and Redshift to deliver scalable, reliable, and high-performance data solutions.

Key Responsibilities:
- Design and implement scalable data pipelines using AWS services and PySpark.
- Develop data workflows and ETL processes using Python and SQL.
- Optimize and manage Redshift databases for performance, scalability, and data integrity.
- Monitor, troubleshoot, and optimize data systems for cost-effectiveness and efficiency.

Must-Have Skills:
- Redshift: expertise in advanced querying, optimization, and database management.
- AWS: hands-on experience with Glue, S3, EMR, and Lambda.
- Python: strong scripting, automation, and data transformation skills.
- PySpark: proven experience handling large datasets and distributed data processing.

Nice-to-Have Skills:
- Familiarity with data warehousing concepts and big data architecture.

Why Join Us?
- 100% remote work flexibility.
- Opportunity to work on cutting-edge data engineering projects.
- Collaborative and innovative work culture.

Compensation: Competitive and commensurate with skills and experience; no limits for exceptional talent.

📩 Apply now and be part of a team that’s redefining data-driven solutions. Interested candidates may share their resumes at hr@namasys.ai.

Artificial Intelligence Engineer
Location: Delhi, India | Experience: 1–3 years | Salary: Not disclosed | Remote | Full Time

We’re Hiring: AI Engineer

Work Type: 100% Remote
Experience: 1.5–3 years
Notice Period: Immediate joiners preferred; up to 30 days acceptable

About the Role:
We're looking for a passionate AI Developer with deep experience in LLMs and Agentic AI to join our fully remote team.

What You’ll Do:
- Design, build, and optimize solutions using cutting-edge Large Language Models (LLMs) such as ChatGPT, Claude, LLaMA, and Mistral (both online APIs and local/offline deployments).
- Develop Agentic AI systems capable of autonomous task execution and intelligent decision-making.
- Apply data science and machine learning techniques for tasks such as predictive modeling, classification, clustering, and data analysis.
- Collaborate with cross-functional teams to integrate AI components into larger platforms or workflows.
- Continuously research and experiment with emerging AI and ML technologies to keep our solutions ahead of the curve.

What We’re Looking For:
- Strong experience with modern LLMs (OpenAI, Anthropic, Meta, Mistral, etc.).
- Proven experience developing Agentic AI systems and frameworks (e.g., LangGraph, AutoGen, CrewAI, or custom implementations).
- Deep knowledge of data science and ML, including experience with supervised/unsupervised learning, time series forecasting, and model evaluation.
- Proficiency in Python and experience with AI/ML libraries such as PyTorch, TensorFlow, scikit-learn, Hugging Face Transformers, and LangChain.
- Experience with model fine-tuning, vector databases (e.g., FAISS, Chroma, Pinecone), and retrieval-augmented generation (RAG) is a big plus.
- Ability to work independently in a remote, fast-paced environment.

Why Join Us?
- 100% remote work flexibility.
- Opportunity to work on cutting-edge AI engineering projects.
- Collaborative and innovative work culture.

Compensation: Competitive and commensurate with skills and experience; no limits for exceptional talent.

📩 Apply now and be part of a team that’s redefining data-driven solutions. Interested candidates may share their resumes at hr@namasys.ai.

Program Manager
Location: Delhi, India | Experience: 9 years | Salary: Not disclosed | Remote | Full Time

We’re Hiring: Program Manager

Work from Home: Yes (may require travel outside India to the client site based on project requirements)
Notice Period: Immediate to 30 days
Experience: 9+ years

Role Overview:
We are seeking a highly experienced Program Manager to lead a Managed Services engagement focused on enterprise reporting and analytics. The program involves managing and delivering reporting through Power BI Report Server and Power BI Service. The Program Manager will be responsible for governance, stakeholder management, delivery oversight, and continuous improvement of the reporting ecosystem.

Key Responsibilities:

Program Management & Governance
- Lead end-to-end delivery of the Managed Services program, ensuring adherence to SLAs and KPIs.
- Establish and manage governance frameworks for reporting, security, and compliance.
- Manage vendor/client relationships, acting as the single point of accountability for program success.

Stakeholder Engagement
- Collaborate with CXOs, business leaders, and IT stakeholders to translate business needs into actionable outcomes.
- Conduct regular steering committee meetings, program reviews, and status updates.

Technical Oversight
- Oversee BI architecture integrating SAP HANA and Power BI (Service & Report Server).
- Ensure performance, scalability, and data governance across all reporting solutions.
- Validate KPIs, reporting logic, and dashboard usability for executive and operational stakeholders.

Team Leadership
- Lead and mentor a cross-functional team of BI developers, data engineers, and analysts.
- Drive adoption of best practices in DAX, SQL, Power Query, and data modeling.
- Build a self-service analytics culture by enabling reusable assets and standardized templates.

Continuous Improvement
- Implement proactive monitoring, alerting, and automation for reporting.
- Ensure optimal usage of licensing, gateways, and deployment pipelines.
- Identify opportunities for ML/AI integration and advanced analytics.

Required Skills & Experience:
- 9+ years of experience in Business Intelligence, Data Analytics, and Program Management.
- Proven track record managing BI programs with SAP HANA and Power BI (Service/Report Server).
- Strong background in SQL, data modeling (star/snowflake schemas), and BI solution architecture.
- Experience with DevOps, CI/CD, and automation frameworks in BI environments.
- Hands-on exposure to data governance, Row-Level Security (RLS), and compliance practices.
- Prior experience in global, multi-stakeholder programs with complex delivery landscapes.
- Strong leadership and communication skills to drive executive conversations and manage distributed teams.

Preferred Qualifications:
- Microsoft Power BI Data Analyst Associate (PL-300) or equivalent certification.
- SAP HANA certification (e.g., ABAP on HANA 2.0 Specialist).
- Exposure to Azure Synapse, Databricks, or Microsoft Fabric.
- Experience working with government clients is an advantage.

Why Join Us?
- Work from home (may require travel outside India to the client site based on project requirements).
- Collaborative and innovative work culture.

Compensation: Competitive and commensurate with skills and experience; no limits for exceptional talent.

📩 Apply now and be part of a team that’s redefining data-driven solutions. Interested candidates may share their resumes at hr@namasys.ai.
