
1769 Data Architecture Jobs - Page 7

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

12 - 16 Lacs

Gurugram

Work from Office

Specialism: Data, Analytics & AI
Management Level: Specialist

Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Requirements:
- Total experience of 5-8 years, with 4+ years of relevant experience
- Proficiency on the Databricks platform
- Strong hands-on experience with PySpark, SQL, and Python
- Any cloud: Azure, AWS, or GCP

Certifications (any of the following):
- Databricks Certified Associate Developer for Spark 3.0 (preferred)
- Databricks Certified Data Engineer Associate
- Databricks Certified Data Engineer Professional

Mandatory skill sets: Data Engineering
Preferred skill sets: Data Engineering
Years of experience required: 0-1 years
Education qualification: BE
Degrees/fields of study required: Bachelor of Engineering

Required skills: Data Engineering; Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Strategy (+22 more)
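For a concrete sense of the pipeline work this listing describes, here is a minimal PySpark sketch of a simple curation flow: ingest raw data, type and cleanse it, and persist a curated table. Paths and table names are hypothetical, not PwC's.

```python
# A minimal PySpark curation-pipeline sketch (hypothetical paths/tables).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("curation-pipeline").getOrCreate()

# Ingest raw CSV landed by an upstream integration (hypothetical path).
raw = spark.read.option("header", True).csv("/mnt/landing/transactions.csv")

# Transform: type the columns and drop obviously bad records.
curated = (
    raw.withColumn("amount", col("amount").cast("double"))
    .withColumn("txn_date", to_date(col("txn_date"), "yyyy-MM-dd"))
    .filter(col("amount").isNotNull())
)

# Persist as a managed table for downstream analysis (Delta by default
# on Databricks; plain Parquet elsewhere).
curated.write.mode("overwrite").saveAsTable("analytics.transactions_curated")
```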

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

This position provides input and support for, and performs, full systems life cycle management activities (e.g., analysis, technical requirements, design, coding, testing, and implementation of systems and applications software). He/she participates in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements, provides input to applications development project plans and integrations, and collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives. The role also provides knowledge and support for applications development, integration, and maintenance, and gives input to department and project teams on decisions supporting projects.

Responsibilities:
- Performs systems analysis and design.
- Designs and develops moderate to highly complex applications.
- Develops application documentation.
- Produces integration builds.
- Performs maintenance and support.
- Supports emerging technologies and products.

Primary skills: Modern web development, ASP.NET, Angular 2
Secondary skills: Agile development, automated testing

Qualifications:
- Bachelor's degree or international equivalent
- Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field (preferred)

Skills: Windows, Linux, Tech Support

Posted 1 week ago

Apply

4.0 - 7.0 years

8 - 12 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

This position provides input and support for, and performs, full systems life cycle management activities (e.g., analysis, technical requirements, design, coding, testing, and implementation of systems and applications software). He/she participates in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements, provides input to applications development project plans and integrations, and collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives. The role also provides knowledge and support for applications development, integration, and maintenance, and gives input to department and project teams on decisions supporting projects.

Responsibilities:
- Performs systems analysis and design.
- Designs and develops moderate to highly complex applications.
- Develops application documentation.
- Produces integration builds.
- Performs maintenance and support.
- Supports emerging technologies and products.

Qualifications:
- Bachelor's degree or international equivalent
- Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field (preferred)

Posted 1 week ago

Apply

0.0 - 6.0 years

2 - 8 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

About the Team and the Role
Our Data Platform team is seeking a Senior Data Engineer to play a pivotal role in establishing robust data governance, access controls, and a unified customer profile within our Snowflake environment. You'll be instrumental in designing and implementing scalable data solutions, ensuring data security and compliance across diverse business units. This isn't just a technical role; you'll be a key influencer, shaping our company-wide data strategies and fostering a culture of innovation and operational excellence. Your expertise in Snowflake configuration, Looker integration, and data segregation will directly impact how we manage and leverage our data to drive business impact. We're looking for someone who stays ahead of industry trends and integrates best practices into our data management approach.

What You'll Do
- Design and implement data governance in Snowflake: Provision and configure Snowflake environments, including setting up roles, virtual warehouses, schemas, and connection policies to align with organizational access and performance requirements.
- Establish robust access controls: Implement data classification and tagging frameworks to support granular access control, auditing, and reporting. Develop and enforce robust Role-Based Access Control (RBAC) models to ensure least-privilege principles across teams and business units.
- Configure Looker for secure reporting: Design and configure Looker's access model, including user roles, content restrictions, and data governance policies. Ensure Looker's structure aligns seamlessly with Snowflake permissions to provide secure and consistent reporting environments.
- Enforce cross-tenant data segregation: Define and implement secure data boundaries across business units, effectively segregating client and internal data between environments. Restrict access appropriately and minimize cross-tenant exposure risks.
- Automate data operations and governance: Develop operational scripts and automation pipelines (e.g., using Python or SQL) to support recurring data tasks, enforce access rules, and streamline integration across Snowflake and downstream tools.
- Build a unified customer profile: Contribute to the development of a comprehensive data model and services for our Unified Customer Profile, integrating financial data from multiple sources and systems while ensuring data quality and governance.
- Collaborate with stakeholders: Work closely with data scientists, product managers, and business leaders to understand their data needs and translate them into scalable and secure data solutions.
- Drive data-driven roadmaps: Develop roadmaps for data architecture capabilities that align with business objectives and the overall technology strategy, influencing engineering, product, and cross-functional teams to identify data opportunities and drive impactful business decisions.

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience, with 3+ years of relevant experience.
- Proven expertise in Snowflake setup and configuration, including provisioning tenants, roles, warehouses, schemas, and connection policies.
- Strong experience implementing data access controls and tagging frameworks within data warehousing solutions, specifically with RBAC models.
- Hands-on experience configuring Looker's access model, including user roles, content restrictions, and data governance policies, with an emphasis on aligning Looker with underlying data platform permissions.
- Demonstrated ability to define and enforce data boundaries and segregation across business units, particularly for client and internal data.
- Proficiency in developing automation scripts and pipelines (e.g., Python or SQL) for data operations, access rule enforcement, and integrations.
- Deep understanding of data modeling principles, including schema design and dimensional data modeling, with experience writing efficient SQL queries.
- Experience with data processing, storage, and querying technologies (e.g., Hadoop, Spark, Flink, BigQuery, Dataflow, Airflow, Bigtable, Spanner, and DynamoDB) and object-oriented programming languages (e.g., Python, Java, C++).
- Experience with semantic search (e.g., Vertex AI Search, AWS CloudSearch) and knowledge graphs (e.g., Neo4j) to integrate data with generative AI experiences.
- Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering.
- Excellent technical communication skills, with the ability to collaborate with cross-functional teams and drive technical innovation in data engineering.

Preferred Qualifications
- Bachelor's/Master's degree in a STEM field.
- Experience working with terabyte- to petabyte-scale data.
- Ability to thrive in fast-paced environments at a large-scale company.
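To make the RBAC and tagging work above concrete, here is a minimal sketch using the snowflake-connector-python package. All role, warehouse, database, and tag names are hypothetical placeholders rather than the employer's actual design.

```python
# A minimal Snowflake RBAC + classification-tagging sketch (hypothetical names).
import snowflake.connector

ddl_statements = [
    # Least-privilege role for one business unit.
    "CREATE ROLE IF NOT EXISTS bu_finance_analyst",
    "CREATE WAREHOUSE IF NOT EXISTS wh_finance WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60",
    "GRANT USAGE ON WAREHOUSE wh_finance TO ROLE bu_finance_analyst",
    "GRANT USAGE ON DATABASE analytics TO ROLE bu_finance_analyst",
    "GRANT USAGE ON SCHEMA analytics.finance TO ROLE bu_finance_analyst",
    "GRANT SELECT ON ALL TABLES IN SCHEMA analytics.finance TO ROLE bu_finance_analyst",
    # Classification tag to drive auditing and masking policies.
    "CREATE TAG IF NOT EXISTS analytics.governance.data_class",
    "ALTER TABLE analytics.finance.customers SET TAG analytics.governance.data_class = 'confidential'",
]

def apply_governance(conn_params: dict) -> None:
    """Apply the RBAC and tagging DDL in a single session."""
    conn = snowflake.connector.connect(**conn_params)
    try:
        cur = conn.cursor()
        for stmt in ddl_statements:
            cur.execute(stmt)
    finally:
        conn.close()

if __name__ == "__main__":
    apply_governance({
        "account": "myorg-myaccount",  # hypothetical account identifier
        "user": "governance_admin",
        "password": "***",
        "role": "SECURITYADMIN",
    })
```

In practice, DDL like this would typically live in version-controlled migration scripts run through CI/CD rather than an ad-hoc script.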

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Who We Are
At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals.

Asset & Wealth Management Engineering
Across Wealth Management, Goldman Sachs helps empower clients and customers around the world to reach their financial goals. Our advisor-led wealth management businesses provide financial planning, investment management, banking and comprehensive advice to a wide range of clients, including ultra-high net worth and high net worth individuals, as well as family offices, foundations and endowments, and corporations and their employees. Our Asset Management business delivers innovative investment solutions through a global, multi-product platform and is one of the pre-eminent investment management organizations globally. Our consumer business provides digital solutions for customers to better spend, borrow, invest, and save. Our growth is driven by a relentless focus on our people, our clients and customers, and leading-edge technology, data and design.

The Team
The Asset & Wealth Management Data Office is a group within AWM whose primary responsibilities are to ensure that the data used for business and reporting purposes is well understood and of the highest quality available, implement the firm's data governance policies, and expand and improve the strategic data architecture. We work with global stakeholders to provide transparency into where data enters the firm, how it is transformed, reported, and classified, and what data quality controls exist for critical datasets. Our internal clients use this information to gain insights aimed at eliminating duplication, improving data quality, responding faster to new business opportunities, and meeting regulatory requirements.

How You Will Fulfill Your Potential
As a member of the team, you will gain satisfaction through adding value and contributing to the team's initiatives. You will:
- Develop communication and reference materials that enable data consumers and producers to improve data quality and implement the data governance policy
- Provide guidance and training on how to plan implementation of requirements set forth by the policy
- Provide data quality analytics that enable data consumers and producers to drive remediation efforts
- Work with data consumers and producers to negotiate ownership of data
- Create lineage graphs to show how data moves from point of entry to where it is used
- Define and create appropriate data validation controls
- Partner with users and provide feedback on the strategic tooling to engineering teams for business use cases
- Build consensus across senior stakeholders
- Partner closely with stakeholders to define and evolve firmwide data governance strategy
- Communicate progress to senior stakeholders and within the team
- Test and monitor data quality controls

You will have the potential to:
- Grow your understanding of data and the underlying businesses that use it
- Develop business, data analysis and relationship management skills
- Contribute to progressing the data strategy at Goldman Sachs

Why join the team:
- Interpersonal communication: You'll engage with data producers and consumers across all areas of the business to understand their requirements and propose solutions tailored to their needs.
- Autonomy: You'll have significant autonomy in designing and writing solutions to help our stakeholders deliver for the firm's clients.
- Creativity: You'll be encouraged to suggest improvements to products and to propose ways in which we can add value for our stakeholders.
- Training: Your manager will support your professional development, allowing you time for training at work, helping you learn and grow within the organization, and providing opportunities for increasing responsibility.

Responsibilities and Requirements
Experience working with stakeholders on projects to develop strategies and solutions, ideally related to data. The ability to work collaboratively with stakeholders and drive consensus is essential. Experience working with a business team to develop functional requirements and translate them into technical requirements is important. Having developed an effective training and testing strategy is helpful.

Responsibilities
- Play a central role in defining the strategic direction for Asset & Wealth Management data initiatives
- Document data lineage from source to reporting
- Lead and participate in working groups to improve data quality and ease of access to information
- Help promote the data governance framework and drive adoption across the division
- Interface and coordinate with project teams to define objectives, develop approaches, create detailed schedules, provide status updates and prepare deliverables for projects
- Partner with stakeholders to ensure that user tools for analyzing data meet users' needs
- Manage stakeholders, sponsors and users at all levels
- Test and monitor data quality controls

Basic Requirements
- Bachelor's degree
- 3+ years of relevant hands-on data governance, data quality, or data management experience
- Sufficient knowledge and ability to run queries and participate in data analysis
- Proficiency in SQL, MS Excel, data visualization tools, and data models
- Highly organized with exceptional communication, negotiation and influencing skills
- Relationship management: effectively partner with stakeholders with a focus on end-client value
- Knowledge of data warehousing and the development of physical and logical models
- Extremely proactive; works well in a collaborative environment
- Exceptional attention to detail and analytical thinking
- Ability to effectively communicate and present results highlighting the broader strategic impact

Preferred Qualifications
- Experience in data governance or data management
- Prior experience in the financial services industry
- Advanced proficiency in SQL, MS Excel, data visualization tools, and data models
- May have a software engineering background (not required)

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled Ontology Expert & Knowledge Graph Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring accurate representation and integration of complex data sets. You will leverage industry best practices to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs that drive data-driven decision-making and innovation within the company.

Job Purpose: The Ontology & Knowledge Graph / Data Engineer designs, develops, implements, and maintains enterprise ontologies in support of the organization's data-driven digitalization strategy. The role combines architecture ownership with hands-on engineering: you will model ontologies, stand up graph infrastructure, build semantic pipelines, and expose graph services that power search, recommendations, analytics, and GenAI solutions for our organization. We are seeking a highly skilled, motivated expert to drive development and shape the future of enterprise AI by designing and implementing large-scale ontologies and knowledge graph solutions. You'll work closely with internal engineering and AI teams to build scalable data models that enable advanced reasoning, semantic search, and agentic AI workflows.

Key Responsibilities:
1. Ontology development: Design and apply ontology principles to improve semantic reasoning and data integration, ensuring alignment with business requirements and industry standards. Collaborate with domain experts, product managers, and customers to capture and formalize domain knowledge into ontological structures and vocabularies and to improve data discoverability. Develop and maintain comprehensive ontologies to model business entities, relationships, and processes. Integrate semantic data models with existing data infrastructure and applications.
2. Knowledge graph implementation and data integration: Design and build knowledge graphs based on ontologies, drawing on data from multiple sources while ensuring data integrity and consistency. Collaborate with data engineers on data ingestion and ensure smooth integration of data from multiple sources. Administer and maintain graph database solutions, including both semantic and property graphs. Use knowledge graphs to enable advanced analytics, search, and recommendation systems.
3. Data quality and governance: Ensure the quality, accuracy, and consistency of ontologies and knowledge graphs. Define and implement data governance processes and standards for ontology development and maintenance.
4. Collaboration and communication: Collaborate with internal engineering teams to align data architecture with GenAI capabilities. Leverage AI techniques by aligning knowledge models with RAG pipelines and agent orchestration. Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
5. Research and innovation: Stay up to date with the latest advancements in NLP, LLMs, and machine learning, and proactively identify opportunities to leverage new technologies for improved solutions.

Experience:
- 4-6 years of industry experience in AI, Data Science, or Data Engineering.
- 2-3 years of hands-on experience building ontologies and knowledge systems.
- Proficiency with graph databases such as Neo4j and GraphDB (RDF-based).
- Understanding of semantic standards such as OWL, RDF, and W3C specifications, as well as property graph approaches.
- Familiarity with GenAI concepts, including retrieval-augmented generation and agent-based AI.

Required Knowledge/Skills, Education, and Experience:
- Bachelor's or master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field; a specialization in natural language processing is preferred.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python and other programming languages used for data engineering.
- Experience with NLP and GenAI frameworks (LangChain, LangGraph).
- Solid project experience with cloud computing (AWS, Azure, or GCP), including services such as VPCs, EBS, ALBs, NLBs, EC2, and S3.
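As a small illustration of the RDF/OWL/SPARQL skills listed above, here is a minimal sketch using Python's rdflib: define a tiny ontology, add instance data, and query it with SPARQL. The namespace, classes, and instances are invented examples, not the employer's model.

```python
# A minimal ontology + knowledge-graph sketch with rdflib (hypothetical data).
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/ontology/")

g = Graph()
g.bind("ex", EX)

# Tiny ontology: Product and Supplier classes plus a suppliedBy relationship.
g.add((EX.Product, RDF.type, RDFS.Class))
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.suppliedBy, RDF.type, RDF.Property))
g.add((EX.suppliedBy, RDFS.domain, EX.Product))
g.add((EX.suppliedBy, RDFS.range, EX.Supplier))

# Instance data forming a small knowledge graph.
g.add((EX.widget42, RDF.type, EX.Product))
g.add((EX.widget42, RDFS.label, Literal("Widget 42")))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.widget42, EX.suppliedBy, EX.acme))

# SPARQL query: which supplier provides each product?
results = g.query("""
    PREFIX ex: <http://example.org/ontology/>
    SELECT ?product ?supplier
    WHERE { ?product ex:suppliedBy ?supplier . }
""")
for product, supplier in results:
    print(product, "->", supplier)
```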

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office

Overview: At Prolifics, we are implementing multiple solutions in software development, and we are looking to hire a talented Snowflake Architect for our development centre in India. This is a permanent position based in Hyderabad. If you are looking for a high-growth company with rock-solid stability, and you thrive in the energetic atmosphere of high-profile projects, we want to talk to you today! Let's connect and explore the possibility of having you onboard the Prolifics team!

Job Title: Snowflake Architect
Primary skills: Experience in data architecture, with at least 4 years focused on Snowflake
Secondary skills: Strong expertise in Snowflake design, data modeling, and best practices for cloud data platforms
Location: Hyderabad (Mindspace #12B)
Educational qualification: B.Tech/BE/M.Tech/MCA/M.Sc
Experience: 8+ years

Key Responsibilities:
- Design and implement Snowflake data architectures, ensuring scalable, secure, and optimized data solutions.
- Develop data models, schemas, and ETL workflows to support business intelligence and analytics requirements.
- Work with cross-functional teams (data engineers, data scientists, business analysts) to understand requirements and translate them into Snowflake-based solutions.
- Optimize Snowflake performance through best practices for query tuning, storage management, and cost optimization.
- Lead the migration of legacy data solutions to Snowflake, including data integration and transformation processes.
- Design and implement data security measures in Snowflake to meet compliance and governance standards.
- Provide leadership and mentorship to junior architects and engineers, ensuring best practices are followed.
- Collaborate with business stakeholders to ensure data architecture aligns with organizational goals and requirements.

Required Skills:
- 8+ years of experience in data architecture, with at least 4 years focused on Snowflake.
- Strong expertise in Snowflake design, data modeling, and best practices for cloud data platforms.
- Hands-on experience with SQL, ETL processes, and cloud-based data integration (AWS, Azure, GCP).
- Proficiency in optimizing data pipelines, query performance, and managing data warehousing solutions in Snowflake.
- Experience with cloud technologies and integrating Snowflake with other data tools such as Power BI, Tableau, or Looker.
- Familiarity with data governance, security, and compliance standards in cloud environments.
- Strong project management and communication skills for leading teams and interacting with clients.
- Ability to manage end-to-end solution delivery and ensure alignment with business objectives.

About us: Prolifics Corporation Limited is a global technology solutions provider with a presence across North America (USA and Canada), Europe (UK and Germany), the Middle East, and Asia. In India, we have three offshore development centres: two in Hyderabad and one in Pune. For more than 40 years, Prolifics has transformed enterprises of all sizes, including over 100 Fortune 1000 companies, by solving their complex IT challenges. Our clients include Fortune 50 and Fortune 100 companies across a broad range of industries, including financial services, insurance, government, healthcare, telecommunications, manufacturing, and retail. We rank consistently in the Dream Companies to Work For and Dream Employer of the Year rankings from the World HRD Congress, ranking 7th in 2019. Visit us at www.prolifics.com or follow us on Twitter, LinkedIn, Facebook, YouTube, and other social media to learn more about us.

Posted 1 week ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

Hyderabad

Work from Office

We are seeking a Snowflake Engineer to join our dynamic Data Engineering team. This role involves owning the architecture, implementation, and optimization of our Snowflake-based data warehouse solutions while mentoring a team of engineers and driving project success. The ideal candidate will bring deep technical expertise in Snowflake, hands-on experience with dbt (Data Build Tool), and a collaborative mindset for working across data, analytics, and business teams.

Key Responsibilities:
- Design and implement scalable and efficient Snowflake data warehouse architectures and ELT pipelines.
- Leverage dbt to build and manage data transformation workflows within Snowflake.
- Lead data modeling efforts to support analytics and reporting needs across the organization.
- Optimize Snowflake performance, including query tuning, resource scaling, and storage usage.
- Collaborate with business stakeholders and data analysts to gather requirements and deliver high-quality data solutions.
- Manage and mentor a team of data engineers; provide technical guidance, code reviews, and career development support.
- Establish and enforce best practices for data engineering, including version control, CI/CD, documentation, and data quality.
- Ensure data solutions are secure, compliant, and aligned with privacy regulations (e.g., GDPR, CCPA).

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering, including at least 2+ years of hands-on experience with Snowflake.
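The dbt transformation work described above is most often written as SQL models; as one concrete illustration that stays in Python (the language of the other sketches on this page), here is a minimal dbt Python model, which dbt supports from version 1.3 on Snowflake via Snowpark. Model, file, and column names are hypothetical.

```python
# File: models/marts/fct_daily_orders.py (hypothetical dbt project layout)
# A minimal dbt Python model sketch for Snowflake/Snowpark (dbt 1.3+).

def model(dbt, session):
    # Materialize as a table; dbt issues the CREATE TABLE AS in Snowflake.
    dbt.config(materialized="table")

    # Upstream staging model managed by dbt (hypothetical name).
    orders = dbt.ref("stg_orders")

    # Snowpark DataFrame API: aggregate order revenue per day.
    from snowflake.snowpark.functions import col, sum as sum_, to_date

    return (
        orders
        .with_column("order_date", to_date(col("ordered_at")))
        .group_by("order_date")
        .agg(sum_(col("amount")).alias("daily_revenue"))
    )
```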

Posted 1 week ago

Apply

8.0 - 14.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Architect is a senior-level position responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Architect drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data. This role will manage a team of data modelers.

Roles & Responsibilities:
- Provide oversight to data modeling team members.
- Develop and maintain conceptual, logical, and physical data models to support business needs.
- Establish and enforce data standards, governance policies, and best practices.
- Design and manage metadata structures to enhance information retrieval and usability.
- Maintain comprehensive documentation of the architecture, including principles, standards, and models.
- Evaluate and recommend technologies and tools that best fit the solution requirements.
- Evaluate emerging technologies and assess their potential impact.
- Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency.

Basic Qualifications and Experience [GCF Level 6A]:
- Doctorate degree and 2 years of experience in Computer Science, IT, or a related field; OR
- Master's degree with 8-10 years of experience in Computer Science, IT, or a related field; OR
- Bachelor's degree with 10-14 years of experience in Computer Science, IT, or a related field; OR
- Diploma with 14-18 years of experience in Computer Science, IT, or a related field.

Functional Skills (Must-Have):
- Data modeling: Expert in creating conceptual, logical, and physical data models to represent information structures; able to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
- Metadata management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
- Information governance: Familiarity with policies and procedures for managing information assets, including security, privacy, and compliance.
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including performance tuning of big data processing (see the sketch below).

Good-to-Have Skills:
- Experience with graph technologies such as Stardog, AllegroGraph, or MarkLogic.

Professional Certifications:
- Databricks certifications are desired.

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work evening or night shifts, as required by business needs.

Equal Opportunity Statement: We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
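As a small illustration of the Spark performance-tuning skill listed above, here is a minimal PySpark sketch showing three common levers: shuffle-partition sizing, partition pruning, and broadcast joins. Paths, table names, and settings are hypothetical.

```python
# A minimal Spark (PySpark) tuning sketch: pruning, broadcast joins, caching.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col

spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    # Fewer shuffle partitions for a modest-sized cluster (assumption).
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Large fact table, partitioned on event_date so filters prune partitions.
events = spark.read.parquet("/data/events")          # hypothetical path
recent = events.where(col("event_date") >= "2024-01-01")

# Small dimension table: broadcast it to avoid a shuffle join.
dims = spark.read.parquet("/data/dim_customers")     # hypothetical path
joined = recent.join(broadcast(dims), "customer_id")

# Cache only if the result is reused by several downstream aggregations.
joined.cache()
joined.groupBy("event_date").count().show()
```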

Posted 1 week ago

Apply

15.0 - 20.0 years

50 - 60 Lacs

Bengaluru

Work from Office

Role Overview: We are looking for a strategic and hands-on Senior Director/AVP, Data & AI to lead the vision, design, and execution of LeadSquared's data platform and AI roadmap. This leader will be responsible for building scalable infrastructure, driving productized AI capabilities, and empowering teams with data insights that fuel decisions and automation across the company and platform. The role works closely with product, engineering, customer success, and GTM teams to transform how data and AI power our products and internal decisions.

Key Responsibilities:

Data Platform & Infrastructure
- Define and execute the strategy for LeadSquared's next-generation enterprise data platform to support reporting, analytics, and ML use cases.
- Own end-to-end data engineering: real-time pipelines, data lakes, and transformation workflows (ClickHouse, Spark, Kafka, Airflow).
- Improve observability, quality, and governance of customer data at scale across tenants and domains.

AI & ML Strategy
- Lead the development of ML/AI features in LeadSquared products, e.g., lead scoring, predictive workflows, agent assist, and generative AI features.
- Build a scalable MLOps framework to take models from experimentation to production seamlessly.
- Ensure responsible, explainable, and secure use of AI aligned with regulatory and ethical standards.

Analytics & BI Enablement
- Partner with business and product teams to deliver dashboards, insights, and self-serve data capabilities.
- Define and drive KPIs for product usage, GTM success, customer health, and internal efficiency.

Leadership & Collaboration
- Build and lead a cross-functional team of data engineers, ML engineers, and analysts.
- Collaborate with engineering to embed data capabilities into the core platform architecture.
- Evangelize a data-driven culture across LeadSquared, from leadership to product teams to customer-facing roles.

Ideal Profile:
- 15+ years of experience in data/AI roles, with at least 5 years in senior leadership.
- Proven experience leading data platforms and AI productization in a B2B SaaS environment.
- Strong understanding of multi-tenant SaaS, customer engagement data, and large-scale analytics systems.
- Deep expertise in data architecture, streaming systems, warehousing, and AI/ML lifecycle management.
- Experience with the modern data stack: ClickHouse (preferred), Snowflake/BigQuery, dbt, Kafka, Python, Spark, Airflow, etc.
- Strong business acumen and ability to work cross-functionally with product, business, marketing, and sales.

Nice to Have:
- Experience with AI in CRM or sales automation platforms (e.g., lead intelligence, conversation insights).
- Exposure to personalization engines, customer data platforms (CDPs), or predictive customer engagement.
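As a sketch of the pipeline orchestration implied by the ClickHouse/Spark/Kafka/Airflow stack above, here is a minimal Airflow 2.x DAG with placeholder tasks. The DAG id, schedule, and callables are hypothetical, not LeadSquared's.

```python
# A minimal Airflow DAG sketch (Airflow 2.4+; hypothetical ids and tasks).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_events():
    """Pull a micro-batch of engagement events (placeholder)."""
    print("extracting events from a Kafka topic")

def transform_events():
    """Clean and enrich the batch (placeholder for a Spark job trigger)."""
    print("transforming events")

def load_events():
    """Load into the analytics store, e.g. ClickHouse (placeholder)."""
    print("loading events")

with DAG(
    dag_id="engagement_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="*/15 * * * *",  # every 15 minutes (assumption)
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    load = PythonOperator(task_id="load", python_callable=load_events)

    extract >> transform >> load
```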

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 20 Lacs

Pune

Work from Office

8+ years of experience in ETL/DW projects, with migration, team management, and delivery experience. Proven expertise in Snowflake data warehousing, ETL, and data governance. Experience with cloud ETL and ETL migration tools.

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 20 Lacs

Pune, Bengaluru

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone can grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JD: Data Modeler
The business architecture team supports the delivery of key front-office-to-back-office (F2B) transformation priorities of the management board. The data architecture team plays the central role of defining the data model that will align business processes, ensure data lineage and effective control, and implement client strategy and reporting solutions. This will require building strong relationships with key stakeholders and helping to deliver tangible value.

Key responsibilities:
- Define and manage data models used to implement solutions that automate business processes and controls.
- Ensure the models follow the bank's data modelling standards and principles, and influence those standards as necessary.
- Actively partner with various functional leads and teams to socialize the data models towards adoption and execution of front-to-back solutions.

Skills and experience:
- 8+ years in financial services, preferably in strategy and solutions in the corporate and investment banking domain.
- Strong knowledge of transaction banking domain processes and controls for banking and trading businesses, to drive conversations with business SMEs. Experience in developing models for transaction banking products is preferable.
- Knowledge of loans or cash/deposits lifecycles and/or the customer lifecycle, and the related business data required to manage operations and analytics, is desirable.
- Well-developed business requirements analysis skills, including good communication abilities (both spoken and listening) and stakeholder management at all levels.

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Job Summary: We are looking for a talented Data Engineer cum Database Developer with a strong background in the banking sector. The ideal candidate will have experience with SQL Server, AWS PostgreSQL, AWS Glue, and ETL tools, along with expertise in data ingestion frameworks and Control-M scheduling.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines to support data ingestion and transformation processes.
- Collaborate with cross-functional teams to gather requirements and implement solutions tailored to banking applications.
- Utilize SQL Server and AWS PostgreSQL for database development, optimization, and management.
- Implement data ingestion frameworks to ensure efficient and reliable data flow.
- Develop and maintain ETL processes using AWS Glue or other ETL tools, with Control-M for scheduling.
- Ensure data quality and integrity through validation and testing processes.
- Monitor and optimize system performance to support business analytics and reporting needs.
- Document data architecture, processes, and workflows for reference and compliance purposes.
- Stay updated on industry trends and best practices related to data engineering and management.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of experience in data engineering and database development, preferably in the banking sector.
- Proficiency in SQL Server and AWS PostgreSQL.
- Experience with Databricks, AWS Glue, or other ETL tools (e.g., Informatica, ADF).
- Strong understanding of data ingestion frameworks and methodologies.
- Excellent problem-solving skills and attention to detail.
- Knowledge of securitization in the banking industry is a plus.
- Strong communication skills for effective collaboration with stakeholders.
- Familiarity with cloud-based data architectures and services.
- Experience with data warehousing concepts and practices.
- Knowledge of data privacy and security regulations in banking.
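As an illustration of the AWS Glue ETL development this listing asks for, here is a minimal sketch of a Glue PySpark job: read a catalog table, filter it, and write to PostgreSQL over a pre-defined Glue connection. The catalog, table, and connection names are hypothetical.

```python
# A minimal AWS Glue PySpark job sketch (hypothetical names throughout).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw loan transactions registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="banking_raw", table_name="loan_transactions"
)

# Basic cleansing: keep settled rows only (hypothetical schema).
cleansed = raw.filter(lambda row: row["status"] == "SETTLED")

# Write to an RDS PostgreSQL target via a pre-defined Glue connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=cleansed,
    catalog_connection="aws-postgres-conn",
    connection_options={"dbtable": "curated.loan_transactions", "database": "analytics"},
)

job.commit()
```

Scheduling such a job via Control-M, as the listing mentions, would typically be done by having Control-M trigger the Glue job run rather than anything in the script itself.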

Posted 1 week ago

Apply

8.0 - 10.0 years

13 - 18 Lacs

Pune, Bengaluru

Work from Office

The purpose of this role is to provide technical guidance and suggest improvements in development processes, and to develop required software features, achieving timely delivery in compliance with the performance and quality standards of the company.

Job Description:

Experience: 8-10 years

Key Responsibilities:
- Lead the design and implementation of complex data solutions with a business-centric approach.
- Guide junior developers and provide technical mentorship.
- Ensure alignment of data architecture with marketing and business strategies.
- Work within Agile development teams, contributing to sprints and ceremonies.
- Design and implement CI/CD pipelines to support automated deployments and testing.
- Apply data engineering best practices to ensure scalable, maintainable codebases.
- Develop robust data pipelines and solutions using Python and SQL.
- Understand and manipulate business data to support marketing and audience targeting efforts.
- Collaborate with cross-functional teams to deliver data solutions that meet business needs.
- Communicate effectively with stakeholders to gather requirements and present solutions.
- Follow best practices for data processing and coding standards.

Skills:
- Proficient in Python for data manipulation and automation.
- Strong experience with SQL development, including MS SQL.
- Excellent written and oral communication skills.
- Deep understanding of business data, especially as it relates to marketing and audience targeting.
- Experience with Agile methodologies and CI/CD processes.
- Familiarity with SAS.
- Good to have: B2B and AWS knowledge.
- Nice to have: hands-on experience with orchestration and automation tools such as Snowflake Tasks and Streams.

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 1 week ago

Apply

7.0 - 9.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Educational Requirements: Bachelor of Engineering / Bachelor of Technology
Service Line: Enterprise Package Application Services

Responsibilities: A day in the life of an Infoscion - as part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs into solution design based on your areas of expertise. You will plan configuration activities, configure the product as per the design, conduct conference room pilots, and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives, with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase business profitability.
- Good knowledge of software configuration management systems.
- Awareness of the latest technologies and industry trends.
- Logical thinking and problem-solving skills, along with an ability to collaborate.
- Understanding of financial processes for various project types and the available pricing models.
- Ability to assess current processes, identify improvement areas, and suggest technology solutions.
- Knowledge of one or two industry domains.
- Client interfacing skills.
- Project and team management.

Location of posting: Infosys Ltd. is committed to ensuring you have the best experience throughout your journey with us. We currently have open positions in a number of locations across India: Bangalore, Pune, Hyderabad, Chennai, Chandigarh, Mysore, Kolkata, Trivandrum, Indore, Nagpur, Mangalore, Noida, Bhubaneswar, Coimbatore, Jaipur, Hubli, and Vizag. While we work in accordance with business requirements, we shall strive to offer you the location of your choice, where possible.

Technical and Professional Requirements:
- Bachelor's or equivalent degree with a minimum of 7 years of experience.
- Adherence to data architecture, modelling, and coding guidelines.
- Ability to understand functional requirements and prepare design documents and/or technical documents.
- HANA modelling: calculation views, stored procedures, scalar and table functions, and performance tuning techniques.
- XS development: XS OData services, XSJS services, and debugging.
- Data Services (DS): end-to-end job development, including transformations, DS scripting, and consuming external services.

Mandatory Skills: SAP Native HANA, implementation, configuration, SAFe Agile methodology.
Preferred Skills: SAP HANA XS / Native HANA.

Posted 1 week ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Kochi

Work from Office

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 4+ years of experience in data modelling and data architecture.
- Proficiency in data modelling tools (ERwin, IBM InfoSphere Data Architect) and database management systems.
- Familiarity with different data models: relational, dimensional, and NoSQL databases.
- Understanding of business processes and how data supports business decision-making.
- Strong understanding of database design principles, data warehousing concepts, and data governance practices.

Preferred technical and professional experience:
- Excellent analytical and problem-solving skills with keen attention to detail.
- Ability to work collaboratively in a team environment and manage multiple projects simultaneously.
- Knowledge of programming languages such as SQL.

Posted 1 week ago

Apply

3.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Develop, test, and support future-ready data solutions for customers across industry verticals, including end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, and information management and associated technologies. Communicate risks and ensure understanding of these risks.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Graduate with a minimum of 5+ years of related experience.
- Experience in modelling and business system design.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Strong expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience:
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
- Strong team player and leader, able to lead data transformation projects with multiple junior data engineers.
- Strong oral, written, and interpersonal skills for interacting with all levels of the organization, and the ability to communicate complex business problems and technical solutions.

Posted 1 week ago

Apply

2.0 - 5.0 years

9 - 13 Lacs

Gurugram

Work from Office

Develop, test, and support future-ready data solutions for customers across industry verticals, including end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, and information management and associated technologies. Communicate risks and ensure understanding of these risks.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Graduate with a minimum of 5+ years of related experience.
- Experience in modelling and business system design.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Strong expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience:
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
- Strong team player and leader, able to lead data transformation projects with multiple junior data engineers.
- Strong oral, written, and interpersonal skills for interacting with all levels of the organization, and the ability to communicate complex business problems and technical solutions.

Posted 1 week ago

Apply

2.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Develop, test, and support future-ready data solutions for customers across industry verticals, including end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, and information management and associated technologies. Communicate risks and ensure understanding of these risks.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Graduate with a minimum of 5+ years of related experience.
- Experience in modelling and business system design.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Strong expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience:
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
- Strong team player and leader, able to lead data transformation projects with multiple junior data engineers.
- Strong oral, written, and interpersonal skills for interacting with all levels of the organization, and the ability to communicate complex business problems and technical solutions.

Posted 1 week ago

Apply

6.0 - 11.0 years

11 - 16 Lacs

Bengaluru

Work from Office

As a Senior Backend/Lead Development Engineer, you will develop automation solutions to provision and manage infrastructure across your organization. As a developer, you will leverage the capabilities of Terraform and cloud offerings to drive infrastructure-as-code capabilities for the IBM z/OS platform. You will work closely with frontend engineers as part of a full-stack team; collaborate with Product, Design, and other cross-functional partners to deliver high-quality solutions; maintain high standards of software quality within the team by establishing good practices and habits; and focus on growing capabilities that enhance the experience of the offering.

Required education: Bachelor's Degree

Required technical and professional expertise:
* 15+ years of software development experience with z/OS or z/OS subsystems.
* System programmer able to work on and support development/testing of IBM Z hardware I/O definitions: IODF and IOCDS generation and deployment.
* Familiarity with HMC and HCD.
* 8+ years of professional experience developing with Golang, Python, and Ruby.
* 10+ years of hands-on experience with z/OS system programming or administration.
* Experience with key Terraform features: infrastructure as code, change automation, and auto-scaling.
* Experience working with a cloud provider such as AWS, Azure, or GCP, with a focus on scalability, resilience, and security.
* Cloud-native mindset and solid understanding of DevOps principles in a cloud environment.
* Familiarity with cloud monitoring tools to implement robust observability practices that prioritize metrics, logging, and tracing for high reliability and performance.
* Extensive experience with cloud computing platforms (AWS, Azure, GCP) and infrastructure as code (Terraform).
* Strong interest in customer-focused work, with experience collaborating with Design and Product Management functions to deliver impactful solutions.
* Demonstrated ability to tackle complex technical challenges and deliver innovative solutions.
* Excellent communication and collaboration skills, with a focus on customer satisfaction and team success.
* Strong analytical, debugging, and problem-solving skills to analyze issues and defects reported by customer-facing and test teams.
* Proficiency in source control management tools (e.g., GitHub) and Agile lifecycle management tools.
* Soft skills: strong communication, collaboration, self-organization, and self-study, and the ability to accept and respond constructively to critical feedback.

Posted 1 week ago

Apply

2.0 - 5.0 years

6 - 11 Lacs

Bengaluru

Work from Office

HashiCorp, an IBM company, solves development, operations, and security challenges in infrastructure so organizations can focus on business-critical tasks. We build products to give organizations a consistent way to manage their move to cloud-based IT infrastructures for running their applications. Our products enable companies large and small to mix and match AWS, Microsoft Azure, Google Cloud, and other clouds, as well as on-premises environments, easing their ability to deliver new applications. At HashiCorp, we use the Tao of HashiCorp as our guiding principles for product development and operate according to a strong set of company principles for how we interact with each other. We value top-notch collaboration and communication skills, both among internal teams and in how we interact with our users.

The Role: As a Frontend Engineer II on the Boundary Transparent Sessions team at HashiCorp, you will be instrumental in expanding enterprise functionality that provides a VPN-like passive connection experience for customers. This role plays a critical part in ensuring the Boundary Desktop Client supports daily customer workflows in a performant, scalable way. You will be part of a full-stack team including backend and mobile engineers, and will collaborate cross-functionally with Product, Design, and other partners.

Key Responsibilities:
- Develop and enhance frontend features that provide a VPN-like passive connection experience for customers.
- Ensure the Boundary Desktop Client supports daily customer workflows in a performant and scalable manner.
- Work closely with backend and mobile engineers as part of a full-stack team, and collaborate with Product, Design, and other cross-functional partners to deliver high-quality solutions.
- Maintain high standards of software quality within the team by establishing good practices and habits.
- Focus on growing capabilities to support an enhanced user experience.

Required education: Bachelor's Degree

Required technical and professional expertise:
- 4+ years of software engineering experience with a proven track record in technical or engineering lead roles; experience with diverse technology stacks and project types is preferred.
- Proficiency in JavaScript; experience with the Ember framework is preferred, or a strong interest and ability to get up to speed with it.
- Extensive experience with cloud computing platforms (AWS, Azure, GCP) and infrastructure as code (Terraform).
- Strong interest in customer-focused work, with experience collaborating with Design and Product Management functions to deliver impactful solutions.
- Demonstrated ability to tackle complex technical challenges and deliver innovative solutions.

Preferred technical and professional experience:
- Excellent communication and collaboration skills, with a focus on customer satisfaction and team success.
- Proven ability to lead by example, mentor junior engineers, and contribute to a positive team culture.
- Commitment to developing well-tested solutions to ensure high reliability and performance.

Posted 1 week ago

Apply

4.0 - 6.0 years

20 - 30 Lacs

Gurugram

Work from Office

Key Skills: Spark, Scala, Flink, Big Data, Structured Streaming, Data Architecture, Data Modeling, NoSQL, AWS, Azure, GCP, JVM tuning, Performance Optimization

Roles & Responsibilities:
- Design and build robust data architectures for large-scale data processing.
- Develop and maintain data models and database designs.
- Work on stream processing engines such as Spark Structured Streaming and Flink.
- Perform analytical processing on big data using Spark.
- Administer, configure, monitor, and tune the performance of Spark workloads and distributed JVM-based systems.
- Lead and support cloud deployments across AWS, Azure, or Google Cloud Platform.
- Manage and deploy big data technologies such as business data lakes and NoSQL databases.

Experience Requirements:
- Extensive experience working with large data sets and big data technologies.
- 4-6 years of hands-on experience in the Spark/big data tech stack.
- At least 4 years of experience in Scala.
- At least 2 years of experience in cloud deployment (AWS, Azure, or GCP).
- At least 2 successfully completed product deployments involving big data technologies.

Education: B.Tech + M.Tech (dual degree), or B.Tech.
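To ground the Structured Streaming requirement, here is a minimal windowed-aggregation sketch, shown in PySpark for consistency with the other sketches on this page, though the role itself emphasizes Scala. The broker, topic, schema, and checkpoint path are hypothetical.

```python
# A minimal Spark Structured Streaming sketch (requires the
# spark-sql-kafka package on the classpath; hypothetical names).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "user-events")                # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Tumbling one-minute window of actions per user, tolerating late data.
counts = (
    events.withWatermark("ts", "5 minutes")
    .groupBy(window(col("ts"), "1 minute"), col("user_id"))
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # hypothetical
    .start()
)
query.awaitTermination()
```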

Posted 1 week ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Job Description:
- 5-7 years of experience in SQL development and relational database management.
- Excellent communication and collaboration skills.
- Design, develop, and optimize complex SQL scripts for data transformation, cleansing, and aggregation within Snowflake.
- Build and maintain standardized and dynamic views in Snowflake to support reporting, dashboarding, and data exploration use cases.
- Work closely with BI teams, data analysts, and business stakeholders to understand data requirements and translate them into performant SQL logic.
- Ensure data accuracy, consistency, and performance across the different layers (raw, staging, curated).
- Monitor query performance and optimize SQL queries and Snowflake structures (e.g., clustering, caching, resource usage).
- Participate in data quality validation and implement appropriate error handling and logging.
- Follow version control, deployment, and environment management practices in line with organizational standards.

Key Responsibilities:
- Develop, optimize, and maintain complex SQL queries, stored procedures, views, and functions.
- Design and implement efficient data models and database objects to support applications and reporting needs.
- Collaborate with business analysts and developers to understand data requirements.
- Tune SQL queries and indexes to ensure high performance on large-scale datasets.
- Perform data profiling, validation, and cleansing activities to maintain data integrity.
- Support ad-hoc data requests and report development for internal teams.
- Create and maintain technical documentation for data architecture, ETL workflows, and query logic.
- Assist in database deployments, migrations, and version control as part of the release process.
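As an illustration of the standardized-view work described above, here is a minimal sketch that creates a curated Snowflake reporting view from Python and inspects its plan with EXPLAIN. The account, schema, table, and column names are hypothetical.

```python
# A minimal curated-view sketch for Snowflake (hypothetical names).
import snowflake.connector

CURATED_VIEW = """
CREATE OR REPLACE VIEW analytics.curated.v_daily_sales AS
SELECT
    TO_DATE(order_ts)              AS order_date,
    region,
    SUM(amount)                    AS total_amount,
    COUNT(DISTINCT customer_id)    AS unique_customers
FROM analytics.staging.orders
WHERE status = 'COMPLETE'          -- basic cleansing rule
GROUP BY 1, 2
"""

conn = snowflake.connector.connect(
    account="myorg-myaccount",  # hypothetical account identifier
    user="etl_developer",
    password="***",
    warehouse="wh_reporting",
)
try:
    cur = conn.cursor()
    cur.execute(CURATED_VIEW)
    # Inspect the plan of the view's base query as a first tuning step.
    cur.execute("EXPLAIN SELECT * FROM analytics.curated.v_daily_sales")
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```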

Posted 1 week ago

Apply