We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases such as Snowflake, Redshift, or Databricks.

Key Responsibilities:
- Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage.
- Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration (see the sketch below this posting).
- Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles.
- Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking.
- Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies.
- Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS.
- Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions.
- Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.

Required Qualifications & Skills:
- Strong expertise in AWS Cloud services: compute (EC2), storage (S3), and security (IAM).
- Proficiency in Python, PySpark, and AWS Lambda.
- Mandatory experience with ETL tools: AWS Glue and DMS for data migration and transformation.
- Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus.
- Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, and EDW principles.
- Experience in designing and implementing large-scale, high-performance data solutions.
- Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions.
- Excellent communication and collaboration skills, with experience working in agile environments.

Preferred Qualifications:
- AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent).
- Experience with real-time data streaming (Kafka, Kinesis, or similar).
- Familiarity with Infrastructure as Code (Terraform, CloudFormation).
- Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
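To make the Glue/PySpark pipeline work described above concrete, here is a minimal sketch of a Glue job that promotes raw S3 data to a cleansed layer in the Medallion style. It assumes the AWS Glue runtime (which provides the `awsglue` modules); the bucket paths, the `orders` dataset, and the `order_id`/`order_date` columns are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Bronze layer: raw JSON landed in S3 (hypothetical path and columns)
raw = spark.read.json("s3://example-bucket/bronze/orders/")

# Silver layer: deduplicate and drop rows missing the business key
silver = raw.dropDuplicates(["order_id"]).filter("order_id IS NOT NULL")

# Write curated, partitioned Parquet for downstream consumers
(silver.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/silver/orders/"))

job.commit()
```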
We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256) to protect user identifiers before they are shared with tracking platforms (see the sketch below this posting).
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience in Python and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Proficiency with Azure Cloud technologies: Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
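As a hedged illustration of the hashing responsibility above, here is a minimal Python sketch of the normalize-then-hash step commonly applied to email addresses before they are sent to server-side tracking APIs. The normalization rules (trim, lowercase) reflect common platform guidance such as Meta's; the function name is illustrative.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim and lowercase an email, then return its SHA-256 hex digest.

    This normalize-then-hash pattern is what server-side tracking APIs
    such as Meta CAPI commonly expect for user identifiers.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Example usage: both inputs normalize to the same identifier
assert normalize_and_hash("  User@Example.COM ") == normalize_and_hash("user@example.com")
```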
We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking (see the sketch below this posting).
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Architect and develop secure REST APIs in C# to support advanced attribution models and marketing analytics pipelines.
- Implement cryptographic hashing (e.g., SHA-256) to protect user identifiers before they are shared with tracking platforms.
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with C# and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure Cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
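Although this role centers on C#, here is a language-agnostic sketch (in Python, for consistency with the other examples in this document) of a minimal Meta CAPI server-side event post. The pixel ID, access token, and Graph API version are placeholders, and the payload shows only commonly used fields; Meta's current CAPI documentation remains the authoritative schema.

```python
import time

import requests  # third-party HTTP client, assumed installed

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
GRAPH_VERSION = "v18.0"             # illustrative; use your account's supported version

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "user_data": {
        # Identifiers must be normalized and SHA-256 hashed before sending
        "em": ["<sha256-of-normalized-email>"],
    },
}

resp = requests.post(
    f"https://graph.facebook.com/{GRAPH_VERSION}/{PIXEL_ID}/events",
    json={"data": [event], "access_token": ACCESS_TOKEN},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```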
As a Big Data Engineer, you will be responsible for expanding and optimizing the data and database architecture, as well as optimizing data flow and collection for cross-functional teams. You should be an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. Your role will involve supporting software developers, database architects, data analysts, and data scientists on data initiatives, ensuring optimal data delivery architecture is consistent throughout ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

You should have sound knowledge of Spark architecture and distributed computing, including Spark streaming (see the sketch below this posting). Proficiency in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning, is essential. A good understanding of object-oriented concepts and hands-on experience with Scala/Java/Kotlin, along with excellent programming logic and technique, is required. Additionally, experience with functional programming and OOP concepts in Scala/Java/Kotlin is necessary.

Your responsibilities will include managing a team of Associates and Senior Associates, ensuring proper utilization across projects, and mentoring new members for project onboarding. You should be able to understand client requirements and design, develop, and deliver solutions from scratch. Experience with AWS cloud is preferable, along with the ability to analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the cloud. Leading client calls to address delays, blockers, escalations, and requirements collation, and managing project timing, client expectations, and deadlines are key aspects of the role. Project and team management, facilitating regular team meetings, understanding business requirements, analyzing different approaches, and planning deliverables and milestones for projects are also part of your responsibilities. Optimization, maintenance, and support of pipelines, strong analytical and logical skills, and the ability to tackle new challenges comfortably and keep learning are essential qualities for this role.

The ideal candidate should have 4 to 7 years of relevant experience. Must-have skills for this position include Scala/Java/Kotlin, Spark, SQL (intermediate to advanced level), Spark Streaming, any cloud platform (AWS preferred), Kafka/Kinesis/any streaming service, object-oriented programming, Hive, ETL/ELT design experience, and CI/CD experience for ETL pipeline deployment. Good-to-have skills include proficiency in Git or a similar version control tool, knowledge of CI/CD, and microservices.
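As a hedged illustration of the Spark streaming work mentioned above, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and sinks it to Parquet. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic, schema, and output paths are all hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical event schema
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event", StringType()),
    StructField("ts", TimestampType()),
])

# Consume a Kafka topic as a streaming DataFrame (broker and topic are placeholders)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Sink to Parquet; the checkpoint directory provides fault-tolerant, exactly-once file output
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```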
We are looking for a talented and motivated Consultant Data Analyst to join our dynamic team. You should have a strong background in data analysis, excellent problem-solving skills, and the ability to effectively communicate complex findings to diverse stakeholders. As a Consultant Data Analyst, your main responsibility will be to provide actionable insights and strategic recommendations to our clients, assisting them in unleashing the full potential of their data.

With 4-7 years of experience, you will have complete accountability for delivering 1-2 projects from conception to implementation. This includes managing a team of Associates and Senior Associates, conducting insightful client interviews, and ensuring project timing, client expectations, and deadlines are met efficiently. Your role will also involve creating impactful PowerPoint presentations, actively participating in business development and organization-building activities, and presenting final results to clients while exploring further opportunities.

Key Responsibilities:
- Lead 1-2 projects from start to finish
- Manage a team of Associates and Senior Associates
- Conduct client interviews to gather requirements
- Ensure project timelines and deadlines are met
- Create compelling PowerPoint presentations
- Contribute to business development and organization building
- Present final results to clients and discuss potential opportunities
- Plan project deliverables and milestones
- Provide business analysis and assessment
- Facilitate team meetings regularly
- Track and report team hours

Technical Competencies Required:
- Strong proficiency in SQL
- Experience with Power BI
- Advanced skills in MS Excel
- Ability to generate ad-hoc insights
- Proficiency in MS PowerPoint

If you are passionate about data analysis, possess the required technical competencies, and are eager to take on challenging projects in a dynamic environment, we encourage you to apply for this exciting opportunity.
Experience: 5-8 years

Must-Have Skills:
- Azure Databricks
- Azure Data Factory
- PySpark
- Spark SQL
- ADLS

Responsibilities:
- Design and build data pipelines using Spark SQL and PySpark in Azure Databricks (see the sketch below this posting).
- Design and build ETL pipelines using ADF.
- Build and maintain a Lakehouse architecture in ADLS/Databricks.
- Perform data preparation tasks including data cleaning, normalization, deduplication, and type conversion.
- Work with the DevOps team to deploy solutions in production environments.
- Control data processes and take corrective action when errors are identified. Corrective action may include executing a workaround process and then identifying the cause and solution for data errors.
- Participate as a full member of the global Analytics team, providing solutions for and insights into data-related items.
- Collaborate with your Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices. You will lead projects that include other team members and participate in projects led by other team members.
- Apply change management tools including training, communication, and documentation to manage upgrades, changes, and data migrations.
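Here is a minimal sketch of the kind of data preparation pipeline described above, written in PySpark against Delta tables on ADLS. The storage account, container paths, and column names are illustrative, and the snippet assumes a Databricks (or delta-spark) environment where the Delta format is available.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lower, to_date, trim

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Raw landing data in ADLS; account, container, and columns are illustrative
raw = spark.read.format("delta").load(
    "abfss://bronze@examplelake.dfs.core.windows.net/customers"
)

prepared = (
    raw
    .withColumn("email", lower(trim(col("email"))))           # normalization
    .withColumn("signup_date", to_date(col("signup_date")))   # type conversion
    .dropDuplicates(["customer_id"])                          # deduplication
    .na.drop(subset=["customer_id"])                          # drop null keys
)

# Persist the cleansed data to the silver layer of the Lakehouse
(prepared.write.format("delta")
    .mode("overwrite")
    .save("abfss://silver@examplelake.dfs.core.windows.net/customers"))
```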
Experience: 5 to 9 years

Must-Have Skills:
- Kotlin/Scala/Java
- Spark
- SQL
- Spark Streaming
- Any cloud (AWS preferable)
- Kafka/Kinesis/any streaming service
- Object-Oriented Programming
- Hive, ETL/ELT design experience
- CI/CD experience (ETL pipeline deployment)
- Data modeling experience

Good-to-Have Skills:
- Git or a similar version control tool
- Knowledge of CI/CD, microservices

Role Objective: The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark streaming.
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning (see the sketch below this posting).
- Good understanding of object-oriented concepts and hands-on experience with Kotlin/Scala/Java, with excellent programming logic and technique.
- Good grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
- Good experience in SQL.
- Manage the team of Associates and Senior Associates and ensure utilization is maintained across the project.
- Mentor new members for onboarding to the project.
- Understand client requirements and be able to design, develop from scratch, and deliver.
- AWS cloud experience would be preferable.
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on the cloud (AWS preferred).
- Lead client calls to flag any delays, blockers, and escalations, and collate all requirements.
- Manage project timing and client expectations, and meet deadlines.
- Should have played project and team management roles.
- Facilitate meetings within the team on a regular basis.
- Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
- Optimization, maintenance, and support of pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn.
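As a hedged illustration of the Spark performance tuning mentioned above, here is a minimal PySpark sketch of a common optimization: broadcasting a small dimension table to avoid shuffling a large fact table, then repartitioning before a wide aggregation. The paths, table names, and keys are hypothetical, and the partition count of 200 is a placeholder to tune per workload.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

# Illustrative inputs: a large fact table and a small dimension table
facts = spark.read.parquet("s3a://example-bucket/facts/")
dims = spark.read.parquet("s3a://example-bucket/dims/")

# Broadcasting the small side ships it to every executor, avoiding a
# shuffle of the large table -- a common Spark join optimization
joined = facts.join(broadcast(dims), "dim_id")

# Repartition by the aggregation key so the wide operation is balanced
result = joined.repartition(200, "customer_id").groupBy("customer_id").count()
result.write.mode("overwrite").parquet("s3a://example-bucket/out/")
```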
Responsibilities:
- Lead and participate in the development of high-quality software solutions for client projects, using modern programming languages and frameworks.
- Contribute to system architecture and technical design decisions, ensuring that solutions are scalable, secure, and meet client requirements.
- Work closely with clients to understand their technical needs and business objectives, offering expert advice on software solutions and best practices.
- Provide guidance and mentorship to junior developers, assisting with code reviews, troubleshooting, and fostering a culture of technical excellence.
- Work with project managers, business analysts, and other engineers to ensure that technical milestones are achieved and client expectations are met.
- Ensure the quality of software through testing, code optimization, and identifying potential issues before deployment.
- Stay up to date with industry trends, new technologies, and best practices to continuously improve development processes and software quality.
- Other duties as assigned and directed.

Required Technical Skills:
- Strong expertise in Python programming and development.
- Hands-on experience with Generative AI and LLMs (e.g., OpenAI, Anthropic, Google Gemini, Meta LLaMA) (see the sketch below this posting).
- Familiarity with agentic AI frameworks and multi-agent orchestration patterns.
- Experience building cloud-native solutions (Azure, AWS, or GCP) and integrating AI services.
- Knowledge of data pipelines, APIs, vector databases, and orchestration tools.
- Strong understanding of system design, security, and scalability principles.
- Excellent client-facing and communication skills, with the ability to explain complex technical concepts to business leaders.
- Consulting experience is highly preferred.

Work Location: India - Remote
Shift Timings: 2:00 pm IST to 11:00 pm IST
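As a minimal sketch of the LLM integration work listed above, here is a basic chat-completion call using the OpenAI Python SDK (v1+ interface assumed). The model name is illustrative and changes over time, and the client expects an `OPENAI_API_KEY` in the environment.

```python
from openai import OpenAI  # assumes the v1+ OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Minimal chat-completion call; the model name is a placeholder
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize what a vector database is in two sentences."},
    ],
)
print(response.choices[0].message.content)
```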
Responsibilities:
- Lead and participate in the development of high-quality software solutions for client projects, using modern programming languages and frameworks.
- Contribute to system architecture and technical design decisions, ensuring that solutions are scalable, secure, and meet client requirements.
- Work closely with clients to understand their technical needs and business objectives, offering expert advice on software solutions and best practices.
- Provide guidance and mentorship to junior developers, assisting with code reviews, troubleshooting, and fostering a culture of technical excellence.
- Work with project managers, business analysts, and other engineers to ensure that technical milestones are achieved and client expectations are met.
- Ensure the quality of software through testing, code optimization, and identifying potential issues before deployment.
- Stay up to date with industry trends, new technologies, and best practices to continuously improve development processes and software quality.
- Other duties as assigned and directed.

Required Technical Skills:
- Data engineering experience that includes Azure technologies, with familiarity with modern data platform technologies such as Azure Data Factory, Azure Databricks, Azure Synapse, and Fabric.
- Understanding of Agile engineering practices.
- Deep familiarity and experience in the following areas:
  - Data warehouse and Lakehouse methodologies, including Medallion architecture (see the sketch below this posting)
  - Data ETL/ELT processes
  - Data profiling and anomaly detection
  - Data modeling (Dimensional/Kimball)
  - SQL
- Strong background in relational database platforms.
- DevOps/continuous integration and continuous delivery.

Work Location: India - Remote
Shift Timings: 2:00 pm IST to 11:00 pm IST
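To illustrate how Medallion architecture and Kimball-style modeling meet in practice, here is a minimal sketch of a Type 1 dimension upsert into a gold-layer Delta table using the Delta Lake merge API. The mount paths, table, and `customer_key` column are hypothetical, and the snippet assumes Databricks or a delta-spark-enabled Spark session.

```python
from delta.tables import DeltaTable  # available on Databricks or with delta-spark
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative paths and keys: staged changes and a gold-layer dimension table
updates = spark.read.format("delta").load("/mnt/silver/customer_updates")
dim = DeltaTable.forPath(spark, "/mnt/gold/dim_customer")

(
    dim.alias("d")
    .merge(updates.alias("u"), "d.customer_key = u.customer_key")
    .whenMatchedUpdateAll()      # Type 1: overwrite changed attributes in place
    .whenNotMatchedInsertAll()   # insert rows for brand-new customers
    .execute()
)
```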
Responsibilities:
- Lead and participate in the development of high-quality software solutions for client projects, using modern programming languages and frameworks.
- Contribute to system architecture and technical design decisions, ensuring that solutions are scalable, secure, and meet client requirements.
- Work closely with clients to understand their technical needs and business objectives, offering expert advice on software solutions and best practices.
- Provide guidance and mentorship to junior developers, assisting with code reviews, troubleshooting, and fostering a culture of technical excellence.
- Work with project managers, business analysts, and other engineers to ensure that technical milestones are achieved and client expectations are met.
- Ensure the quality of software through testing, code optimization, and identifying potential issues before deployment.
- Stay up to date with industry trends, new technologies, and best practices to continuously improve development processes and software quality.
- Other duties as assigned and directed.

Required Technical Skills:
- Strong proficiency in C# and at least one other programming language, such as JavaScript or Python.
- Experience with modern web frameworks (e.g., React, Angular, Node.js) and backend technologies (e.g., Spring, Django) (see the sketch below this posting).
- Familiarity with relational and non-relational databases (e.g., MySQL, Azure SQL, MongoDB).
- Experience deploying applications on cloud services such as AWS, Azure, or Google Cloud.
- Understanding of DevOps practices and tools, including CI/CD pipelines, version control (Git), and containerization (Docker).

Work Location: India - Remote
Shift Timings: 2:00 pm IST to 11:00 pm IST
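As a small backend illustration (in Python, one of the secondary languages this posting names, using Flask as one common framework), here is a minimal REST endpoint of the kind such a full-stack role would build and deploy. The route, payload, and port are purely illustrative.

```python
from flask import Flask, jsonify  # Flask is one common Python backend framework

app = Flask(__name__)

# A minimal REST endpoint; the route and payload are placeholders
@app.route("/api/health", methods=["GET"])
def health():
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8000)
```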
Role Objective: The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark streaming.
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
- Good understanding of object-oriented concepts and hands-on experience with Kotlin/Scala/Java, with excellent programming logic and technique.
- Good grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
- Good experience in SQL.
- Manage the team of Associates and Senior Associates and ensure utilization is maintained across the project.
- Mentor new members for onboarding to the project.
- Understand client requirements and be able to design, develop from scratch, and deliver.
- AWS cloud experience would be preferable.
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on the cloud (AWS preferred).
- Lead client calls to flag any delays, blockers, and escalations, and collate all requirements.
- Manage project timing and client expectations, and meet deadlines.
- Should have played project and team management roles.
- Facilitate meetings within the team on a regular basis.
- Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
- Optimization, maintenance, and support of pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn.

Experience: 5 to 9 years

Must-Have Skills:
- Kotlin/Scala/Java
- Spark
- SQL
- Spark Streaming
- Any cloud (AWS preferable)
- Kafka/Kinesis/any streaming service
- Object-Oriented Programming
- Hive, ETL/ELT design experience (see the sketch below this posting)
- CI/CD experience (ETL pipeline deployment)
- Data modeling experience

Good-to-Have Skills:
- Git or a similar version control tool
- Knowledge of CI/CD, microservices
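As a hedged sketch of the Hive/ELT design experience listed above, here is a minimal PySpark job that stages raw CSV data and loads it into a partitioned, Hive-managed table. The bucket path, database, table, and `sale_date` column are hypothetical, and the session assumes a cluster configured with Hive support.

```python
from pyspark.sql import SparkSession

# enableHiveSupport lets Spark read and write Hive-managed tables
spark = (
    SparkSession.builder.appName("etl-to-hive")
    .enableHiveSupport()
    .getOrCreate()
)

# Illustrative ELT step: stage raw CSV, then load a partitioned Hive table
staged = spark.read.option("header", True).csv("s3a://example-bucket/staging/sales/")

(
    staged.write.mode("overwrite")
    .partitionBy("sale_date")        # partition pruning speeds date-bounded queries
    .format("parquet")
    .saveAsTable("analytics.sales")  # hypothetical database.table
)
```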
You will collaborate closely with data teams and developers to comprehend user needs, business objectives, and dashboard prerequisites. Your primary responsibilities will involve redesigning and improving the user interface of BI dashboards with a strong emphasis on usability, accessibility, and design best practices.

As part of your role, you will develop wireframes, mockups, and interactive prototypes for dashboard layouts. You will also define and uphold a consistent visual style guide encompassing color palettes, typography, iconography, and layout standards for BI reports. Working in tandem with BI developers, you will ensure that the design vision is accurately executed within the chosen BI tools. It will be essential for you to stay abreast of UI/UX trends and design innovations to introduce fresh ideas and concepts.

You should be able to communicate design concepts effectively to technical and non-technical stakeholders. Proficiency in creating dashboards using tools such as PowerPoint and Photoshop, along with the capability to translate intricate ideas into clear and concise visual content, will be crucial for this role. Your experience with UI/UX design tools like Figma and Canva will be valuable in successfully fulfilling the responsibilities of this position.
As a Solution Specialist at Affine, you will collaborate with various teams including sales, delivery, and marketing to present Affine's offerings to clients and partners. This role provides you with the opportunity to work on diverse business challenges and projects as part of the business team. Working closely with senior management, including founders and VPs, you will be involved in defining solution strategies and engagement workflows. We are looking for individuals with a blend of technical expertise and business acumen who are eager to enhance their skills continuously.

Your responsibilities will include:
- Engaging with sales teams, clients, and partners to grasp solution requirements and translating business queries into analytical solutions.
- Developing solution proposals and responding to requests for proposals (RFPs).
- Participating in sales meetings as a subject matter expert (SME) and showcasing Affine's thought leadership through storytelling.
- Leading and guiding a team of Solution Analysts to achieve expected SLAs.
- Staying ahead of industry trends to assist in the formulation of strategic plans for the organization.
- Researching the latest advancements in relevant analytical technologies and sharing insights within the company.
- Communicating effectively with clients, partners, and internal teams across different levels and geographies.
- Engaging with both technical and non-technical stakeholders to facilitate discussions efficiently.

Requirements:
- Minimum of 5+ years of relevant experience in the analytics industry.
- Extensive experience in architecting, designing, and deploying data engineering technologies, such as big data technologies (e.g., Hadoop, Spark) and cloud platforms.
- Proficiency in Data Warehouse and Data Lake solutions.
- Knowledge of machine learning and advanced AI concepts is a plus.
- Strong technical writing skills to produce necessary documentation.
- Ability to identify and contribute to projects aimed at enhancing services, customer satisfaction, profitability, and team efficiency.
- Customer-focused mindset with the ability to handle escalations and balance customer needs with internal team requirements.
- Experience in translating business inquiries into analytical solutions.
- Proficiency in MS PowerPoint and Excel.
- Bachelor's (B.Tech.) or master's degree in engineering or other quantitative disciplines.