
Koantek

5 Job openings at Koantek
Solution Architect Mumbai, Delhi / NCR, Bengaluru 7 - 11 years INR 50.0 - 60.0 Lacs P.A. Work from Office Full Time

Role: Resident Solution Architect
Location: Remote

The Solution Architect at Koantek builds secure, highly scalable big data solutions that achieve tangible, data-driven outcomes while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. The role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, Spark/Scala, or PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- Experience with IoT/event-driven/microservices in the cloud
- Experience with private and public cloud architectures, their pros/cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Experience creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala (see the sketch after this listing)
- Ability to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and of defining conceptual, logical, and physical data models
- Proficient-level experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change-management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs; experience with MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for a near real-time data warehouse, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in software development, data engineering, or data analytics using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience

Good to have:
- Advanced technical certifications: Azure Solutions Architect Expert
- AWS Certified Data Analytics; DASCA Big Data Engineering and Analytics
- AWS Certified Cloud Practitioner, Solutions Architect
- Professional Google Cloud Certified

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
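As an illustration of the Spark SQL requirement above (creating, partitioning, bucketing, loading, and aggregating tables), here is a minimal PySpark sketch. It is not Koantek code; the table name, columns, and landing path are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

# Load one day of raw landing data (path and schema are hypothetical).
raw = spark.read.json("/landing/sales/2024-01-01/")

# Write as a managed table: partitioned by ingest date so queries can
# prune whole directories, and bucketed by customer_id so joins and
# aggregations on that key shuffle less data.
(raw.withColumn("ingest_date", F.to_date(F.lit("2024-01-01")))
    .write
    .partitionBy("ingest_date")
    .bucketBy(16, "customer_id")
    .sortBy("customer_id")
    .mode("append")
    .saveAsTable("sales_events"))

# Aggregate with Spark SQL; the WHERE clause touches only one partition.
daily_totals = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM sales_events
    WHERE ingest_date = DATE'2024-01-01'
    GROUP BY customer_id
""")
daily_totals.show()
```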

Senior Data Engineer Navi Mumbai 5 - 10 years INR 15.0 - 30.0 Lacs P.A. Work from Office Full Time

About Koantek
Koantek is a Databricks Pure-Play Elite Partner, helping enterprises modernize faster and unlock the full power of Data and AI. Backed by Databricks Ventures and honored as a six-time Databricks Partner of the Year, we enable global enterprises to modernize at speed, operationalize AI, and realize the full value of their data. Our deep expertise spans industries such as healthcare, financial services, retail, and SaaS, delivering end-to-end solutions from rapid prototyping to production-scale AI deployments.

Description:
As a Senior Data Engineer at Koantek, you will leverage advanced data engineering techniques and analytics to support business decisions for our clients. Your role will involve designing and building robust data pipelines, integrating structured and unstructured data from various sources, and developing tools for data processing and analysis. You will play a pivotal role in managing data infrastructure, optimizing data workflows, and guiding data-driven strategies while working closely with data scientists and other stakeholders.

The impact you will have:
- Guide big data transformations: Implement comprehensive big data projects, including the development and deployment of innovative big data and AI applications.
- Ensure best practices: Guarantee that Databricks best practices are applied throughout all projects to maintain high-quality service and successful implementation.
- Support project management: Assist the Professional Services leader and project managers with estimating effort and managing risk within customer proposals and statements of work.
- Architect complex solutions: Design, develop, deploy, and document complex customer engagements, either independently or as part of a technical team, serving as the technical lead and authority.
- Enable knowledge transfer: Facilitate the transfer of knowledge and provide training to team members, customers, and partners, including the creation of reusable project documentation.
- Contribute to consulting excellence: Share expertise with the consulting team and offer best practices for client engagement, enhancing the effectiveness and efficiency of other teams.

What we look for:

Qualifications:
- Educational background: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Experience: 6+ years of experience as a Data Engineer, with proficiency in at least two major cloud platforms (AWS, Azure, GCP).
- Proven experience designing, developing, and implementing comprehensive data engineering solutions using Databricks, specifically for large-scale data processing and integration projects.
- Ability to develop scalable streaming and batch solutions using cloud-native components (see the sketch after this listing).
- Ability to perform data transformation tasks, including cleansing, aggregation, enrichment, and normalization, using Databricks and related technologies.
- Experience applying DataOps principles and implementing CI/CD and DevOps practices within data environments to optimize development and deployment workflows.

Technical skills:
- Expert-level proficiency in Spark Scala, Python, and PySpark.
- In-depth knowledge of data architecture, including Spark Streaming, Spark Core, Spark SQL, and data modeling.
- Hands-on experience with data management technologies and tools such as Kafka, StreamSets, and MapReduce.
- Proficiency with advanced analytics and machine learning frameworks, including Apache Spark MLlib, TensorFlow, and PyTorch, to drive data insights and solutions.

Databricks-specific skills:
- Extensive experience in data migration from on-premises to cloud environments and in implementing data solutions on Databricks across cloud platforms (AWS, Azure, GCP).
- Skill in designing and executing end-to-end data engineering solutions using Databricks, focusing on large-scale data processing and integration.
- Proven hands-on experience with Databricks administration and operations, including notebooks, clusters, jobs, and data pipelines.
- Experience integrating Databricks with other data tools and platforms to enhance overall data management and analytics capabilities.

Preferred qualifications (certifications):
- Certification in Databricks Engineering (Professional)
- Microsoft Certified: Azure Data Engineer Associate
- GCP: Professional Google Cloud Certified
- AWS Certified Solutions Architect Professional
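To illustrate the streaming-and-batch requirement above, here is a minimal PySpark Structured Streaming sketch of a cleanse-enrich-append pipeline. The paths, schema, and column names are hypothetical assumptions; on Databricks the file source would typically be Auto Loader ("cloudFiles") and the sink a Unity Catalog table.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Streaming file sources need an explicit schema up front.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("country", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .schema(schema)
       .json("/landing/orders/"))  # hypothetical landing path

cleansed = (raw
    # cleansing: drop records missing required fields
    .dropna(subset=["order_id", "amount"])
    # normalization: uppercase country codes, default the missing ones
    .withColumn("country", F.upper(F.coalesce("country", F.lit("UNKNOWN"))))
    # enrichment: stamp ingestion time for downstream auditing
    .withColumn("ingest_ts", F.current_timestamp()))

query = (cleansed.writeStream
         .format("delta")  # Delta sink, standard on Databricks
         .option("checkpointLocation", "/checkpoints/orders/")  # exactly-once bookkeeping
         .outputMode("append")
         .start("/curated/orders/"))

query.awaitTermination()  # block; in a job this runs until stopped
```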

Data Scientist Hyderabad, Mumbai (All Areas) 4 - 9 years INR 20.0 - 35.0 Lacs P.A. Hybrid Full Time

Location: HYD (preferable) / Remote (Pan-India), with on-site as needed
Experience: 4-8 years
Employment Type: Full-time

About Koantek:
Koantek is a Databricks Pure-Play Elite Partner, helping enterprises modernize faster and unlock the full power of Data and AI. Backed by Databricks Ventures and honored as a six-time Databricks Partner of the Year, we enable global enterprises to modernize at speed, operationalize AI, and realize the full value of their data. Our deep expertise spans industries such as healthcare, financial services, retail, and SaaS, delivering end-to-end solutions from rapid prototyping to production-scale AI deployments. We deliver tailored solutions that enable businesses to leverage data for growth and innovation. Our team of experts utilizes deep industry knowledge combined with cutting-edge technologies, tools, and methodologies to drive impactful results. By partnering with clients across a diverse range of industries, from emerging startups to established enterprises, we help them uncover new opportunities and achieve a competitive advantage in the digital age.

About the Role:
We are looking for a Data Scientist with 4 to 8 years of experience in developing Natural Language Processing (NLP) and Generative AI (GenAI) solutions. The ideal candidate is hands-on, with the ability to rapidly research, design, and build state-of-the-art prototypes for both internal R&D and live customer projects. Experience with Databricks (especially MLOps Stacks) is highly desirable.

Key Responsibilities:
- Translate business challenges into solvable NLP and GenAI use cases, such as document understanding, web search, automated Q&A, summarization, and workflow automation.
- Stay current with the latest GenAI/LLM advancements and evaluate them for feasibility and potential use.
- Design, build, and deploy LLM-powered retrieval-augmented generation (RAG) pipelines and agentic AI solutions, including multi-step reasoning systems, tool-using agents, and associated pipelines.
- Build basic UI front ends (e.g., using Streamlit or Flask) for internal demos or client-facing pilot GenAI applications.
- Apply MLOps best practices, including MLflow-based tracking, Docker containerization, and CI/CD for GenAI pipelines.
- Develop customer demos and prototypes using the Databricks Mosaic AI suite.
- Contribute to both internal R&D efforts and customer implementations, including rapid POCs and scalable production deployments.

Required Qualifications:
- 4-8 years of implementation experience in machine learning, with a strong focus on NLP and GenAI applications in a customer-facing role. Must have productionized machine learning or deep learning models.
- Familiarity with SQL and working with large, complex datasets.
- Proficiency in Python and NLP/LLM libraries/tools such as HuggingFace Transformers, LangChain, LangGraph, LlamaIndex, etc.
- Hands-on experience with prompt engineering, chunking, vector embeddings, semantic search, RAG pipelines, and LLM fine-tuning (see the sketch after this listing).
- Understanding of GenAI-specific challenges: hallucination, prompt security, rate limits, cost optimization, etc.
- Strong foundation in statistics, including:
  - Model assumptions and diagnostics
  - Evaluation metrics and error analysis
  - Probabilistic modeling, hypothesis testing, and uncertainty quantification
  - Feature importance and interpretability techniques
- Experience with MLOps tools and processes, including:
  - Model versioning and experiment tracking (e.g., MLflow)
  - Containerization (Docker)
  - CI/CD for ML workflows (e.g., GitHub Actions, Azure DevOps, or similar)
  - Model monitoring and retraining workflows
- Must: Hands-on experience with Databricks for model development and deployment.
- Must: Familiarity with cloud environments and their native AI/ML tools and services (Azure, AWS, or GCP).
- Strong analytical and communication skills, with a demonstrated ability to convert business requirements into NLP/GenAI solutions.

Educational Background:
- Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Statistics, Operational Research, or a related quantitative discipline.
- Relevant certifications (e.g., Databricks certifications, AWS/Azure/GCP AI/ML certifications) are a plus.

Workplace Flexibility:
This is a hybrid role with remote flexibility. On-site presence at customer locations may be required based on project and business needs. Candidates should be willing and able to travel for short- or medium-term assignments when necessary.
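As a concrete illustration of the chunking, embedding, and semantic-search skills listed above, here is a minimal RAG retrieval sketch. The corpus, embedding model, and chunk sizes are illustrative assumptions, not a prescribed stack; a production pipeline would add a vector store and an LLM call that answers from the retrieved context.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size character chunking with overlap, so context is not
    lost at hard chunk boundaries."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

docs = [
    "Databricks Lakehouse unifies data warehousing and AI workloads...",
    "Delta Lake provides ACID transactions over cloud object storage...",
]  # hypothetical corpus

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

chunks = [c for d in docs for c in chunk(d)]
embeddings = model.encode(chunks, normalize_embeddings=True)  # unit vectors

def retrieve(query: str, k: int = 3) -> list[str]:
    """Semantic search: rank chunks by cosine similarity to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity via dot product of unit vectors
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("What does Delta Lake add on top of object storage?"))
# `context` would then be inserted into the LLM prompt for grounded answering.
print(context)
```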

Solution / Data Architect Maharashtra 7 - 11 years INR Not disclosed On-site Full Time

As a Databricks AWS/Azure/GCP Architect at Koantek based in Mumbai, you will play a crucial role in building secure and highly scalable big data solutions that drive tangible, data-driven outcomes while emphasizing simplicity and operational efficiency. Collaborating with teammates, product teams, and cross-functional project teams, you will lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. Your responsibilities will include implementing securely architected big data solutions that are operationally reliable, performant, and aligned with strategic initiatives.

Your expertise should include expert-level knowledge of data frameworks, data lakes, and open-source projects like Apache Spark, MLflow, and Delta Lake. You should possess hands-on coding experience in Spark/Scala, Python, or PySpark. An in-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib, is essential for this role (a short caching sketch follows this listing). Experience with IoT/event-driven/microservices in the cloud, familiarity with private and public cloud architectures and their pros/cons, and extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services are key requirements.

With over 9 years of consulting experience and a minimum of 7 years in data engineering, data platforms, and analytics, you should have a proven track record of delivering projects, with hands-on development experience on Databricks. Knowledge of at least one cloud platform (AWS, Azure, or GCP) is mandatory, along with deep experience in distributed computing with Spark and familiarity with Spark runtime internals. Additionally, you should be familiar with CI/CD for production deployments and with optimization for performance and scalability, and you should have completed the data engineering professional certification and required classes.

If you are a results-driven professional with a passion for architecting cutting-edge big data solutions and have the desired skill set, we encourage you to apply for this exciting opportunity.
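To make the caching expectation above concrete, here is a minimal PySpark sketch showing how an expensive intermediate DataFrame is persisted and reused across several actions instead of being recomputed. The dataset, path, and columns are hypothetical.

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("caching-sketch").getOrCreate()

events = spark.read.parquet("/curated/events/")  # hypothetical path

# An expensive transformation that several downstream reports share.
enriched = (events
    .filter(F.col("event_type") == "purchase")
    .withColumn("week", F.weekofyear("event_ts")))

# MEMORY_AND_DISK spills partitions to disk rather than failing or
# evicting silently when executor memory runs short.
enriched.persist(StorageLevel.MEMORY_AND_DISK)

# The first action materializes the cache; later actions reuse it.
weekly = enriched.groupBy("week").count()
by_country = enriched.groupBy("country").agg(F.sum("amount").alias("revenue"))
weekly.show()
by_country.show()

# Inspect the physical plan: cached input shows up as an in-memory scan.
by_country.explain()

enriched.unpersist()  # release executor memory when done
```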

Resident Solution Architect Hyderabad, Chennai, Bengaluru 8 - 12 years INR 30.0 - 45.0 Lacs P.A. Work from Office Full Time

About Koantek:
Koantek is a Databricks Pure-Play Elite Partner, helping enterprises modernize faster and unlock the full power of Data and AI. Backed by Databricks Ventures and honored as a six-time Databricks Partner of the Year, we enable global enterprises to modernize at speed, operationalize AI, and realize the full value of their data. Our deep expertise spans industries such as healthcare, financial services, retail, and SaaS, delivering end-to-end solutions from rapid prototyping to production-scale AI deployments. We deliver tailored solutions that enable businesses to leverage data for growth and innovation. Our team of experts utilizes deep industry knowledge combined with cutting-edge technologies, tools, and methodologies to drive impactful results. By partnering with clients across a diverse range of industries, from emerging startups to established enterprises, we help them uncover new opportunities and achieve a competitive advantage in the digital age.

About the Role:
As a Solutions Architect at Koantek, you will collaborate with customers to design scalable data architectures utilizing Databricks technology and services. The RSA at Koantek builds secure, highly scalable big data solutions that achieve tangible, data-driven outcomes while keeping simplicity and operational effectiveness in mind. Leveraging your technical expertise and business acumen, you will navigate complex technology discussions, showcasing the value of the Databricks platform throughout the sales process. Working alongside Account Executives, you will engage with customers' technical leaders, including architects, engineers, and operations teams, aiming to become a trusted advisor who delivers concrete outcomes. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Platform into the enterprise ecosystem and AWS/Azure/GCP architecture.

The impact you will have:
- Develop account strategies: Work with Sales and other essential partners to develop strategies for your assigned accounts to grow their usage of the Databricks platform.
- Establish architecture standards: Establish the Databricks Lakehouse architecture as the standard data architecture for customers through excellent technical account planning.
- Demonstrate value: Build and present reference architectures and demo applications to help prospects understand how Databricks can be used to achieve their goals and land new use cases.
- Capture technical wins: Consult on big data architectures, data engineering pipelines, and data science/machine learning projects to prove out Databricks technology for strategic customer projects. Validate integrations with cloud services and other third-party applications.
- Promote open-source projects: Become an expert in and promote Databricks-inspired open-source projects (Spark, Delta Lake, MLflow) across developer communities through meetups, conferences, and webinars (an MLflow tracking sketch follows this listing).

Technical Expertise:
- Experience translating a customer's business needs into technology solutions, including establishing buy-in with essential customer stakeholders at all levels of the business.
- Experience designing, architecting, and presenting data systems for customers, and managing the delivery of production solutions for those data architectures.
- Projects delivered with hands-on development experience on Databricks.
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake.
- Expert-level hands-on coding experience in Spark/Scala, Python, or PySpark.
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib; IoT/event-driven/microservices in the cloud.
- Deep experience with distributed computing with Spark, with knowledge of Spark runtime internals.
- Experience with private and public cloud architectures, their pros/cons, and migration considerations.
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services.
- Familiarity with CI/CD for production deployments.
- Familiarity with optimization for performance and scalability.
- Completed data engineering professional certification and required classes.
- SQL proficiency: fluent in SQL and database technology.

Educational Background:
- Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research).
- Relevant certifications (e.g., Databricks certifications, AWS/Azure/GCP AI/ML certifications) are a plus.

Workplace Flexibility:
This is a hybrid role with remote flexibility. On-site presence at customer locations may be required based on project and business needs. Candidates should be willing and able to travel for short- or medium-term assignments when necessary.
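As an illustration of the MLflow fluency this role assumes, here is a minimal experiment-tracking sketch using scikit-learn: parameters, a metric, and the model artifact are logged for one run. The experiment name, model, and hyperparameters are illustrative only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data so the sketch is self-contained.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("rf-baseline")  # experiment name is illustrative

with mlflow.start_run(run_name="rf-100-trees"):
    params = {"n_estimators": 100, "max_depth": 8}
    mlflow.log_params(params)  # record hyperparameters

    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)  # record evaluation results

    mlflow.sklearn.log_model(model, "model")  # version the artifact itself
```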