
Data Architect

Experience

12 - 16 years

Salary

3 - 4 Lacs

Posted: 12 hours ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Key Responsibilities:

- Monitor and maintain data pipeline reliability, including logging, alerting, and troubleshooting failures.
- Good knowledge of Artificial Intelligence and Machine Learning.
- Design, develop, and optimize relational and NoSQL databases for diverse applications, including AI and large-scale data processing.
- Build and manage ETL/ELT pipelines to ensure efficient data processing and transformation.
- Optimize database performance for high-availability applications, reporting, and analytics.
- Implement data partitioning, indexing, and sharding strategies for scalability.
- Ensure data integrity, governance, and security across multiple applications.
- Collaborate with teams to streamline data access, model storage, and training workflows where applicable.
- Develop SQL queries, stored procedures, and views for efficient data retrieval.
- Monitor and troubleshoot database performance, bottlenecks, and failures.

Required Skills & Qualifications:

- Strong SQL expertise (writing complex queries, optimization, stored procedures, indexing).
- Experience with relational databases (PostgreSQL, SQL Server) and NoSQL databases (MongoDB, Redis).
- Knowledge of AI-related database optimizations, such as vector databases (e.g., Pinecone, FAISS, Weaviate) for embedding storage and retrieval, is a plus.
- Experience working with enterprise data workflows, including data modeling and architecture.
- Dimensional Modeling / Data Warehousing: Experience with dimensional modeling (star/snowflake schemas) and data warehousing concepts (e.g., Kimball, Inmon).
- Metadata Management & Data Catalogs: Familiarity with metadata management, data catalogs, or data lineage tools (e.g., Alation, GCP Data Catalog, AWS Glue Data Catalog).
- Hands-on experience with cloud platforms (AWS, Azure, GCP) and cloud-based data storage solutions.
- Familiarity with big data technologies (Spark, Hadoop, Kafka) is a plus.
- Strong Python or SQL scripting skills for data manipulation and automation.
- Knowledge of data security, privacy regulations (GDPR, CCPA), and compliance standards.
- Unit / Integration Testing: Experience with testing data pipelines, including unit and integration testing for transformations.
- Documentation: Strong documentation practices for pipelines, database schemas, and data governance processes.
- Excellent problem-solving skills and the ability to collaborate with cross-functional teams.
- Experience with workflow orchestration tools like Apache Airflow or Prefect (see the sketch after this description).

Preferred Qualifications:

- Experience with vector databases and retrieval-augmented generation (RAG) workflows.
- Understanding of AI model storage, caching, and retrieval from databases where applicable.
- Experience with machine learning feature engineering and ML model versioning.
- Experience with containerization technologies like Docker or Kubernetes for deploying data solutions.
- Data Quality and Observability Tools: Experience with tools or frameworks for data quality checks, validation, and data observability (e.g., Great Expectations, Monte Carlo, Databand).
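For illustration only: a minimal sketch of the kind of orchestrated ETL pipeline this role describes, written against Apache Airflow 2.x (one of the orchestration tools named above). The DAG name and the extract/transform/load callables are hypothetical placeholders, not part of the posting.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder for pulling raw rows from a source system.
    return [{"id": 1, "amount": 42.0}]


def transform(**context):
    # Read the upstream task's output via XCom and apply a trivial transformation.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [{**row, "amount_rounded": round(row["amount"])} for row in rows]


def load(**context):
    # Placeholder for writing transformed rows into the warehouse.
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"Loaded {len(rows)} rows")


with DAG(
    dag_id="example_etl",                       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                          # "schedule_interval" on Airflow < 2.4
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Failures at any step surface in Airflow's logging and alerting,
    # matching the pipeline-reliability responsibility listed above.
    extract_task >> transform_task >> load_task

In a real pipeline the placeholder callables would be replaced with actual source and warehouse connectors; the retry settings and task dependency chain are only meant to illustrate the reliability and monitoring responsibilities in the description.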
