📄 Job Description:
We're looking for a driven Web Content Writer who uses AI to support content creation and assist in website development. This is a great opportunity for anyone interested in digital marketing, web content, and technology.

🧰 Key Responsibilities:

📝 Content Creation & Coordination:
- Assist in writing, editing, and optimizing blog posts, web pages, and marketing copy.
- Ensure consistency in tone and branding across digital channels.
- Learn and explore AI tools for smarter content creation (training provided).

🖱️ Website Maintenance & Testing:
- Perform basic testing of website functionality, responsiveness, and layout.
- Spot and report bugs or issues on the site.
- Help manage content using platforms like WordPress or Shopify.

🔎 Research & Engagement:
- Contribute to SEO and content optimization strategies.
- Stay updated on digital trends and suggest new ideas for content and site improvement.

🎓 Qualifications:
- Good written and spoken communication skills.
- Comfortable using a computer; familiarity with a CMS is a plus.
- Detail-oriented and eager to learn.
- Interest in digital media, marketing, or web tech is appreciated.

🕒 Duration & Perks:
- Internship Duration: 3 months
- Type: Full-time
- Mode: On-site
- Education Qualification: Graduate/Undergraduate
- Stipend: To be discussed
- This is a performance-based internship; based on your performance during the 3-month period, there is a possibility of being offered a full-time role.
✨ Hands-on exposure to real-world digital projects
🤝 Supportive and collaborative team culture

📧 Send your CV to: hr@unlink-technologies.com
We are seeking a highly skilled Database Architect to join our development team. This role goes beyond traditional database management: you will design scalable, high-performance database systems that integrate seamlessly with AI and ML components. The ideal candidate will have deep expertise in database architecture while also contributing to broader development efforts, including data pipelines for machine learning models, AI-driven analytics, and automated data processing. This position is crucial for building a robust foundation that supports our platform's data-intensive and intelligent features.

Responsibilities
- Design and architect database schemas, structures, and systems optimized for large-scale data ingestion, storage, and retrieval in a fintech-like environment.
- Collaborate with cross-functional teams (including AI/ML engineers, data scientists, and software developers) to integrate databases with ML/AI workflows, ensuring efficient data flow for model training, inference, and real-time analytics.
- Develop and maintain data pipelines, ETL processes, and data lakes to support AI-driven automation, reconciliation, and predictive modeling (a minimal pipeline sketch follows this posting).
- Optimize database performance, scalability, and security to handle high-volume, mission-critical data while minimizing latency for AI applications.
- Evaluate and implement emerging technologies in databases, big data, and AI integration to enhance system capabilities.
- Conduct data modeling, normalization, and validation to ensure data integrity across automated financial processes.
- Provide technical guidance and support for development teams on database-related aspects of ML/AI projects, including feature engineering and data preprocessing.
- Troubleshoot and resolve complex data issues, ensuring compliance with industry standards for data privacy and security (e.g., GDPR, SOC 2).
- Contribute to the overall system architecture, including cloud migrations and hybrid environments.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- 7+ years of experience in database architecture and design, with a proven track record of building scalable systems for data-heavy applications.
- Strong proficiency in relational databases (e.g., PostgreSQL, MySQL, Oracle) and NoSQL databases (e.g., MongoDB, Cassandra, Redis).
- Hands-on experience with cloud-based database services (e.g., AWS RDS, Aurora, DynamoDB; Azure SQL Database; Google Cloud Spanner, BigQuery).
- Solid understanding of big data technologies (e.g., Hadoop, Spark, Kafka) and data warehousing solutions (e.g., Snowflake, Redshift).
- Demonstrated experience supporting AI/ML development, including integration with frameworks like TensorFlow, PyTorch, or scikit-learn, and handling datasets for model training.
- Proficiency in programming languages such as Python, SQL, Java, or Scala for scripting, automation, and data manipulation.
- Knowledge of data governance, ETL tools (e.g., Apache Airflow, Talend), and API integrations for AI-enhanced systems.
- Excellent problem-solving skills and the ability to work in a fast-paced, agile environment.

Preferred Skills and Experience:
- Background in fintech, data automation, or SaaS platforms focused on data reconciliation and AI-driven financial automation.
- Experience with AI/ML-specific data tools, such as vector databases (e.g., Pinecone, Milvus) for embedding storage or graph databases for relationship modeling.
- Familiarity with containerization and orchestration (e.g., Docker, Kubernetes) for deploying database solutions in ML pipelines.
- Certifications in database management (e.g., Oracle Certified Professional, AWS Certified Database) or AI/ML (e.g., Google Cloud Professional Data Engineer).
- Strong communication skills to articulate technical concepts to non-technical stakeholders.
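To ground the data-pipeline items above (see the note in the Responsibilities list), here is a minimal sketch of the kind of ETL job such a role might own, written against Airflow's TaskFlow API (Airflow 2.4+ assumed). The DAG id, task logic, and sample records are hypothetical placeholders, not details taken from the posting.

```python
# Minimal ETL sketch using Airflow's TaskFlow API (Airflow 2.4+ assumed).
# DAG id, task logic, and sample records are placeholders for illustration only.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["etl-sketch"])
def reconciliation_etl():
    @task
    def extract() -> list[dict]:
        # Stand-in source; a real pipeline would read from an OLTP replica or object store.
        return [{"txn_id": 1, "amount": 100.0}, {"txn_id": 2, "amount": -100.0}]

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # Drop records that fail a basic integrity check before they reach model training.
        return [r for r in rows if r.get("txn_id") is not None]

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for a warehouse load (e.g., COPY into Snowflake/Redshift).
        print(f"loaded {len(rows)} validated rows")

    load(validate(extract()))

reconciliation_etl()
```

A production pipeline would add retries, data-quality checks, and lineage metadata, but the structure (extract, validate, load as independently retryable tasks) is the pattern the ETL and data-governance bullets point at.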
Title: Data Platform / Database Architect (Postgres + Kafka) - AI-Ready Data Infrastructure
Location: Noida (Hybrid). Remote within IST±3 considered for exceptional candidates.
Employment: Full-time

About Us
We are building a high-throughput, audit-friendly data platform that powers a SaaS for financial data automation and reconciliation. The stack blends OLTP (Postgres), streaming (Kafka/Debezium), and OLAP (ClickHouse/Snowflake/BigQuery), with hooks for AI use cases (vector search, feature store, RAG).

Role Summary
Own the end-to-end design and performance of our data platform, from multi-tenant Postgres schemas to CDC pipelines and analytics stores, while laying the groundwork for AI-powered product features.

What You'll Do
- Design multi-tenant Postgres schemas (partitioning, indexing, normalization, RLS) and define retention/archival strategies (see the schema sketch after this posting).
- Make Postgres fast and reliable: EXPLAIN/ANALYZE, connection pooling, vacuum/bloat control, query/index tuning, replication.
- Build event-streaming/CDC with Kafka/Debezium (topics, partitions, schema registry) and deliver data to ClickHouse/Snowflake/BigQuery.
- Model analytics layers (star/snowflake), orchestrate jobs (Airflow/Dagster), and implement dbt-based transformations.
- Establish observability and SLOs for data: query/queue metrics, tracing, alerting, capacity planning.
- Implement data security: encryption, masking, tokenization of PII, IAM boundaries; contribute to a PCI-like audit posture.
- Integrate AI plumbing: vector embeddings (pgvector/Milvus), basic feature-store patterns (Feast), retrieval pipelines, and metadata lineage.
- Collaborate with backend/ML/product to review designs, coach engineers, write docs/runbooks, and lead migrations.

Must-Have Qualifications
- 6+ years building high-scale data platforms with deep PostgreSQL experience (partitioning, advanced indexing, query planning, replication/HA).
- Hands-on with Kafka (or equivalent) and Debezium/CDC patterns; schema registry (Avro/Protobuf) and exactly-once/at-least-once tradeoffs.
- One or more analytics engines at scale: ClickHouse, Snowflake, or BigQuery, plus strong SQL.
- Python for data tooling (pydantic, SQLAlchemy, or similar); orchestration with Airflow or Dagster; transformations with dbt.
- Solid cloud experience (AWS/GCP/Azure): networking, security groups/IAM, secrets management, cost controls.
- Pragmatic performance-engineering mindset; excellent communication and documentation.

Nice-to-Have
- Vector/semantic search (pgvector/Milvus/Pinecone), feature store (Feast), or RAG data pipelines.
- Experience in fintech-style domains (reconciliation, ledgers, payments) and SOX/PCI-like controls.
- Infra-as-Code (Terraform), containerized services (Docker/K8s), and observability stacks (Prometheus/Grafana/OpenTelemetry).
- Exposure to Go/Java for stream processors/consumers.
- Lakehouse formats (Delta/Iceberg/Hudi).

Skills: PostgreSQL, Apache Kafka, CI/CD, Apache Airflow, Slowly Changing Dimensions, Artificial Intelligence (AI) and Machine Learning (ML)
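The first "What You'll Do" item calls for multi-tenant Postgres schemas with partitioning and RLS. Below is a minimal sketch of one common approach (shared schema, tenant_id column, range partitioning by time, row-level security keyed off a session setting); the table, columns, and connection string are placeholders assumed for illustration, not the platform's actual schema.

```python
# Sketch of a shared-schema multi-tenant table: range-partitioned by time and
# protected by row-level security. All names and the DSN are illustrative placeholders.
from sqlalchemy import create_engine, text

DDL = """
CREATE TABLE IF NOT EXISTS ledger_entries (
    tenant_id  uuid          NOT NULL,
    entry_id   bigserial     NOT NULL,
    entry_ts   timestamptz   NOT NULL,
    amount     numeric(18,2) NOT NULL,
    PRIMARY KEY (tenant_id, entry_ts, entry_id)
) PARTITION BY RANGE (entry_ts);

CREATE TABLE IF NOT EXISTS ledger_entries_2024_q1
    PARTITION OF ledger_entries
    FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');

ALTER TABLE ledger_entries ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON ledger_entries
    USING (tenant_id = current_setting('app.current_tenant')::uuid)
"""

def apply_schema(dsn: str = "postgresql+psycopg2://app:app@localhost/appdb") -> None:
    """Apply the DDL statement by statement; rerunning fails only on the policy,
    since CREATE POLICY has no IF NOT EXISTS clause."""
    engine = create_engine(dsn)
    with engine.begin() as conn:
        for stmt in DDL.split(";"):
            if stmt.strip():
                conn.execute(text(stmt))

if __name__ == "__main__":
    apply_schema()
```

The application would then run `SET app.current_tenant = '<uuid>'` per connection (and connect as a non-owner role, since table owners bypass RLS by default), so every query is automatically scoped to a single tenant.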
As a Data Platform / Database Architect specializing in Postgres and Kafka, you will play a crucial role in designing and optimizing a high-throughput, audit-friendly data platform supporting a SaaS for financial data automation and reconciliation. Your responsibilities will encompass owning the end-to-end design and performance of the data platform, including multi-tenant Postgres schemas, CDC pipelines, and analytics stores. You will also lay the groundwork for AI-powered product features.

Key Responsibilities:
- Design multi-tenant Postgres schemas with a focus on partitioning, indexing, normalization, and RLS, while defining retention and archival strategies.
- Optimize Postgres performance through EXPLAIN/ANALYZE, connection pooling, vacuum/bloat control, query/index tuning, and replication.
- Develop event streaming/CDC using Kafka/Debezium, including topics, partitions, schema registry setup, and data delivery to analytics engines such as ClickHouse, Snowflake, and BigQuery (see the consumer sketch after this posting).
- Model analytics layers using star/snowflake schemas, orchestrate jobs with tools like Airflow/Dagster, and implement dbt-based transformations.
- Establish observability and SLOs for data by monitoring query/queue metrics, implementing tracing, setting up alerting, and conducting capacity planning.
- Implement data security measures, including encryption, masking, tokenization of PII, and IAM boundaries, to support a PCI-like audit posture.
- Integrate AI components such as vector embeddings (pgvector/Milvus), basic feature-store patterns (Feast), retrieval pipelines, and metadata lineage.
- Collaborate with backend, ML, and product teams to review designs, coach engineers, create documentation/runbooks, and lead migrations.

Qualifications Required:
- Minimum of 6 years of experience building high-scale data platforms, with in-depth PostgreSQL expertise covering partitioning, advanced indexing, query planning, and replication/HA.
- Hands-on experience with Kafka (or equivalent) and Debezium/CDC patterns, familiarity with schema registries (Avro/Protobuf), and an understanding of exactly-once/at-least-once delivery semantics.
- Proficiency in at least one analytics engine at scale (ClickHouse, Snowflake, or BigQuery), along with strong SQL skills.
- Working knowledge of Python for data tooling (e.g., pydantic, SQLAlchemy), experience orchestrating jobs with Airflow or Dagster, and expertise in implementing transformations using dbt.
- Solid experience with cloud environments (AWS/GCP/Azure), including networking, security groups/IAM, secrets management, and cost controls.
- A pragmatic approach to performance engineering, excellent communication skills, and a knack for thorough documentation.
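For the event-streaming/CDC responsibility above, here is a minimal sketch of a Kafka consumer that reads Debezium-style JSON change events and decides whether each event is an upsert or a delete before handing it to an analytics loader. The broker address, topic, and group id are placeholders, and confluent-kafka with JSON (rather than Avro) serialization is assumed; a production loader would batch and write to ClickHouse/Snowflake instead of printing.

```python
# Sketch of a Debezium CDC consumer (confluent-kafka, JSON envelopes assumed).
# Broker, topic, and group id are placeholders for illustration only.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics-loader-sketch",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["pg.public.ledger_entries"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        envelope = json.loads(msg.value())
        # With schemas.enable=true the event is wrapped as {"schema": ..., "payload": ...};
        # without it, the payload is the top-level object.
        payload = envelope.get("payload", envelope)
        op, after = payload.get("op"), payload.get("after")
        if op in ("c", "u", "r") and after:  # create/update/snapshot-read carry the new row image
            print(f"upsert: {after}")
        elif op == "d":                      # deletes carry only the old row image
            print(f"delete: {payload.get('before')}")
finally:
    consumer.close()
```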