  • From the column Piper蛋窝

    Hello, Approximate!

    The summaries below are based on my own understanding and may contain errors; feedback is welcome: piperliu@qq.com.

    58830 · Published 2020-11-19
  • From the column 杨熹的专栏

    Reinforcement Learning 8: approximate reinforcement learning

    Humans certainly do not learn this way; we have the ability to generalize, and we want reinforcement learning algorithms to have it too. That is where approximate reinforcement learning comes in …

    62210 · Published 2018-11-21
  • From the column 懒人开发

    (7.7) James Stewart Calculus 5th Edition: Approximate Integration

    Approximate Integration. In a Riemann sum we split [a, b] into n subintervals, each of width Δx = (b − a)/n, which gives: …

    85030 · Published 2018-09-12
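The Riemann-sum setup quoted above (Δx = (b − a)/n) can be sketched in a few lines of Python; the function name and the choice of the left-endpoint rule are my own, for illustration:

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum: split [a, b] into n subintervals of width
    dx = (b - a) / n and sum f at each left endpoint, times dx."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# The integral of x**2 over [0, 1] is 1/3; the error shrinks as n grows.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
```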
  • From the column 许唯宇

    Chapter 6, Exercise 22 (Math: approximate the square root): programming exercise solutions

    public static double sqrt(long n) … **6.22 (Math: approximate the square root) There are several techniques …

    34310 · Edited 2022-03-29
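One of the "several techniques" the exercise alludes to is the Babylonian (Newton's) method; the sketch below is my own illustration, not the book's solution:

```python
def approx_sqrt(n, eps=1e-9):
    """Approximate the square root of n >= 0 with the Babylonian
    (Newton's) method: repeatedly average the guess with n / guess."""
    if n < 0:
        raise ValueError("square root of a negative number is not real")
    if n == 0:
        return 0.0
    guess = n if n >= 1 else 1.0  # any positive starting guess converges
    while abs(guess * guess - n) > eps * n:
        guess = (guess + n / guess) / 2.0
    return guess
```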
  • From the column 喵叔's 专栏

    Q1: How do you compute relative time in C#?

    /// When on, approximate to the largest round unit of time. /// public static string ToRelativeDateString(this DateTime value, bool approximate) { StringBuilder sb = new StringBuilder(); … suffix = (… ? "days" : "day"); if (approximate) { return sb.ToString() + suffix; } if (timeSpan.Hours … ? "hours" : "hour"); if (approximate) { return sb.ToString() + suffix; } if (timeSpan.Minutes … ? "seconds" : "second"); if (approximate) { return sb.ToString() + suffix; } if (sb.Length …

    59020 · Published 2021-06-25
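For readers not on .NET, the same "stop at the largest round unit" idea can be sketched in Python; the function and parameter names below mirror the C# excerpt but are otherwise my own:

```python
from datetime import datetime, timedelta

def to_relative_date_string(value, approximate=True, now=None):
    """Render a past datetime as relative text, e.g. "3 days ago".
    With approximate=True, stop at the largest round unit, mirroring
    the C# extension method quoted above (names are illustrative)."""
    now = now or datetime.now()
    span = now - value
    parts = []
    for amount, unit in ((span.days, "day"),
                         (span.seconds // 3600, "hour"),
                         (span.seconds % 3600 // 60, "minute"),
                         (span.seconds % 60, "second")):
        if amount:
            parts.append(f"{amount} {unit}{'s' if amount != 1 else ''}")
            if approximate:  # keep only the largest round unit
                break
    if not parts:
        return "just now"
    return (parts[0] if approximate else ", ".join(parts)) + " ago"
```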
  • From the column 计算机视觉战队

    Tech | Accelerating neural networks with binary arithmetic

    Approximate Power of Two Shifting. Often in deep learning we need to scale values, such as reducing the … These multiplications can be replaced with approximate power-of-two binary shifts. For example, suppose we want to compute the approximate value of 7*5: … where AP2 is the approximate power-of-two operator and << is a left binary shift. This is appealing for two reasons: 1) approximate powers of two can be computed extremely efficiently …

    80770 · Published 2018-04-17
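The AP2 trick from the snippet above, assuming AP2 rounds to the nearest power of two, can be sketched as:

```python
import math

def ap2_exponent(x):
    """Exponent of the power of two nearest to positive x (the AP2
    operator), assuming AP2 rounds log2(x) to the nearest integer."""
    return round(math.log2(x))

def approx_mul(a, b):
    """Approximate a * b as a << ap2_exponent(b), i.e. a * 2**round(log2(b)):
    the multiplication becomes a cheap left binary shift."""
    return a << ap2_exponent(b)

# 7 * 5 = 35; AP2(5) = 2**2 = 4, so the approximation is 7 << 2 = 28.
```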
  • From the column 老齐教室

    Getting started: numbers in Python

    >>> print("It took", after - before) >>> print("Size of output", len(str(res))) >>> print("Approximate value", float(res)). The first run prints: It took 0:01:16.033260, Size of output 90676, Approximate value 2.7092582487972945; the same calls after the faster version print: It took 0:00:00.000480, Size of output 17, Approximate value 2.709258248797317

    91031 · Published 2020-05-15
  • From the column 用户2442861的专栏

    Digital image processing in Python (17): edges and contours

    2. Approximating polygonal curves. Two functions approximate polygonal curves: subdivide_polygon() and approximate_polygon(). subdivide_polygon() uses B-splines; approximate_polygon() is an approximation based on the Douglas-Peucker algorithm: it approximates a polygonal chain within a specified tolerance, and the resulting curve also stays inside the convex hull of the original. Signature: skimage.measure.approximate_polygon(coords, tolerance), where coords is the sequence of coordinate points and tolerance is the tolerance value; it returns the coordinate sequence of the approximated polygon. new_hand = hand.copy(); for _ in range(5): new_hand = measure.subdivide_polygon(new_hand, degree=2); # approximate subdivided polygon with Douglas-Peucker algorithm; appr_hand = measure.approximate_polygon(new_hand, tolerance=…)

    2.1K10 · Published 2018-09-19
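skimage may not be installed everywhere, so here is a minimal pure-Python sketch of the Douglas-Peucker idea behind approximate_polygon(); it is illustrative only and omits the closed-polygon handling of the real function:

```python
import math

def approximate_polygon(coords, tolerance):
    """Douglas-Peucker simplification: drop interior points whose
    perpendicular distance to the chord between the endpoints is
    within `tolerance`, recursing at the farthest point otherwise."""
    if len(coords) <= 2:
        return list(coords)
    (x0, y0), (x1, y1) = coords[0], coords[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of every interior point to the end chord.
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in coords[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] <= tolerance:
        return [coords[0], coords[-1]]  # whole span is "flat enough"
    left = approximate_polygon(coords[:idx + 1], tolerance)
    right = approximate_polygon(coords[idx:], tolerance)
    return left[:-1] + right            # merge, dropping the duplicate point
```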
  • From the column 杨丝儿的小站

    MOB LEC8 Recursive and Kalman Filter

    Approximate nonlinear Bayesian filters include the EKF, approximate grid-based methods, and particle filters. Use approximate grid-based filters and particle filters for non-Gaussian cases. Origin: Dr. …

    38310 · Edited 2022-11-10
  • From the column hotarugaliの技术分享

    Introduction_Of_Convex_Optimization

    Starting with a nonconvex problem, we first find an approximate, but convex, formulation of the problem. By solving this approximate problem, which can be done easily and without an initial guess, we obtain the exact solution to the approximate convex problem. Another broad example is given by randomized algorithms, in which an approximate solution is found by drawing some number of candidates from a probability distribution and taking the best one found as the approximate …

    83510 · Edited 2022-03-18
  • Apache Doris 4.0 has stuffed AI into the database!?

    I didn't answer; I first dumped the user-behavior table, the product-text table, and the image-feature table into Doris and built a vector index. Vector index search functions: l2_distance_approximate() uses the HNSW index to approximate similarity by Euclidean distance; inner_product_approximate() uses the HNSW index to approximate similarity by inner product, where larger values are more similar. SELECT id, l2_distance_approximate(embedding, [...]) … SELECT id, title, l2_distance_approximate(embedding, [...]) … SELECT COUNT(*) FROM doc_store WHERE l2_distance_approximate(embedding, [...]) <= 0.35; dimension 768, quantization …

    17010 · Edited 2026-02-02
  • From the column 算法和应用

    Approximate model counting, sparse XOR constraints, and minimum distance

    Original title: Approximate Model Counting, Sparse XOR Constraints and Minimum Distance. From the abstract: The problem of counting … For this reason, many approximate counters have been developed in the last decade, offering formal guarantees … findings, we finally discuss possible directions for improvements of the current state of the art in approximate …

    73230 · Published 2019-07-18
  • From the column hsdoifh biuwedsy

    Data linkage and privacy

    We also need to add some random records; adding salt does not help, and a two-party protocol does not help either. Approximate linkage protocols: understand the steps of the 3-party protocol for privacy-preserving data linkage with approximate string matching based on 2-grams, and why this method is useful. Similarity is matched using 2-grams, computed as (number of common 2-grams) / (total number of 2-grams in both strings), an easy and effective comparison method … much less space than maintaining a full database of URLs … comparing two strings for approximate …

    53030 · Published 2021-05-19
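The 2-gram similarity described in the notes, using the formula exactly as stated there (common 2-grams over the total in both strings, so even identical strings score 0.5 rather than 1), can be sketched as:

```python
def two_grams(s):
    """All overlapping 2-grams of a string, kept as a list so duplicates count."""
    return [s[i:i + 2] for i in range(len(s) - 1)]

def approx_similarity(a, b):
    """Approximate string similarity as stated in the notes above:
    (number of common 2-grams) / (total number of 2-grams in both strings)."""
    ga, gb = two_grams(a), two_grams(b)
    remaining = gb.copy()
    common = 0
    for g in ga:
        if g in remaining:
            remaining.remove(g)  # pair up each shared 2-gram only once
            common += 1
    total = len(ga) + len(gb)
    return common / total if total else 0.0
```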
  • From the column PingCAP的专栏

    TiDB 6.0 Placement Rules in SQL in practice

    APPROXIMATE_SIZE(MB): 1, APPROXIMATE_KEYS: 0, 1 row in set (0.00 sec) … (root@127.0.0.1) [test] 12:05:16> alter table jian2 … APPROXIMATE_SIZE(MB): 1, APPROXIMATE_KEYS: 0, 1 row in set (0.00 sec) … (root@127.0.0.1) [test] 12:10:44> alter … PLACEMENT … APPROXIMATE_SIZE(MB): 1, APPROXIMATE_KEYS: 0, 1 row in set (0.02 sec) …

    70230 · Edited 2022-08-04
  • From the column 小七的各种胡思乱想

    Tree - XGBoost with parameter description

    In other words, it tries to use a linear function to approximate the target function and find the direction … Following this logic, if we use a second-order polynomial to approximate the target function, we shall get … a base learner to approximate an unbiased final prediction. Therefore an approximate approach can be used, and the block size is optimized to make sure the stats can fit into the CPU cache for the approximate algo.

    1K10 · Published 2019-09-08
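The second-order idea above can be made concrete: approximating the loss around the current prediction with a Taylor expansion, loss(w) ≈ g·w + ½(h + λ)w², yields a closed-form optimal step. The function below is an illustrative sketch of that formula, not XGBoost's actual code:

```python
def second_order_step(g, h, lam=1.0):
    """XGBoost-style use of a second-order Taylor approximation:
    minimizing g*w + 0.5*(h + lam)*w**2 in w gives w* = -g / (h + lam),
    where g and h are summed gradients/Hessians of the loss and lam is
    L2 regularization. Illustrative only."""
    return -g / (h + lam)

# Squared error: loss = 0.5*(pred - y)**2 gives g = pred - y and h = 1.
# With pred = 0, y = 2 and lam = 0, the optimal update is exactly +2.
```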
  • From the column 七云博客

    The legendary "ping of death"

    172.168.200.2: bytes=32 time<10ms … Ping statistics for 172.168.200.2: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate … 172.168.6.1: bytes=32 time=9ms TTL=255, Ping statistics for 172.168.6.1: Packets: Sent = 4, Received = 4, Lost = 0, Approximate … bytes=32 time=6ms TTL=252, Ping statistics for 202.102.48.141: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate …

    1.1K10 · Edited 2022-01-21
  • From the column frytea

    Cisco PT case 9: router-on-a-stick

    TTL=255 … Ping statistics for 192.168.1.254: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate … TTL=128 … Ping statistics for 192.168.1.2: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate … TTL=255 … Ping statistics for 192.168.2.254: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate … TTL=127 … Ping statistics for 192.168.2.1: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate … TTL=127 … Ping statistics for 192.168.2.2: Packets: Sent = 4, Received = 3, Lost = 1 (25% loss), Approximate …

    1.5K10 · Published 2020-07-15
  • From the column CreateAMind

    GANs, mutual information, and possibly algorithm selection?

    What lessons can we learn from GANs for better approximate inference (which is my thesis topic)? … e.g. in this paper the authors connected beta-divergences to Tweedie distributions and performed approximate …

    35530 · Published 2018-07-25
  • From the column CreateAMind

    Faster R-CNN

    Approximate joint training: unlike the previous approach, the RPN and Fast R-CNN are no longer trained serially; instead the two are merged into a single network, as shown in the figure below, where the proposals … One point to note: the "approximate" in the name refers to the fact that during backpropagation the cls score produced by the RPN receives gradients to update its parameters, while the gradient for the proposal coordinate predictions is simply discarded; this setting makes the backward pass … Non-approximate training: the approximate joint training above throws away the gradient of the proposal coordinates, hence "approximate"; in theory, keeping it should improve things further. The authors call this "non-approximate joint training", but the paper only mentions it in passing, noting that "This is a nontrivial problem and a …"

    65320 · Published 2018-07-24
  • Doris 4.x AI: one-stop text/vector search plus intelligent analytics!

    … used to control the quantization method and memory footprint; other parameters such as max_degree and ef_construction control the HNSW graph structure and build performance. Similarity functions: Doris provides the approximate similarity functions l2_distance_approximate(), based on Euclidean (L2) distance, where smaller is more similar, and inner_product_approximate(), based on inner product, where larger is more similar. Hybrid search and pre-filtering: by default Doris uses a "filter first, then vector TopN" pre-filtering scheme, applying precisely resolvable indexes first … DISTRIBUTED BY HASH(id) BUCKETS 16 PROPERTIES("replication_num"="1"); TopN vector search: SELECT id, l2_distance_approximate … doc_store ORDER BY dist ASC LIMIT 10; hybrid search (text filter + tag filter + vector TopN): SELECT id, title, l2_distance_approximate … native SQL calls lower the barrier to entry; text search goes through MATCH_* and SEARCH(), vector search through l2_distance_approximate / inner_product_approximate

    51810 · Edited 2026-01-27
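The exact quantities that l2_distance_approximate() and inner_product_approximate() estimate are ordinary L2 distance and inner product; a brute-force Python equivalent of the TopN query above (function and row-shape names are illustrative) looks like:

```python
import math

def l2_distance(a, b):
    """Exact Euclidean (L2) distance; smaller means more similar.
    This is the quantity l2_distance_approximate() estimates via HNSW."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):
    """Exact inner product; larger means more similar, matching
    inner_product_approximate()."""
    return sum(x * y for x, y in zip(a, b))

def top_n(query, rows, n=10):
    """Brute-force equivalent of ORDER BY dist ASC LIMIT n over
    (id, embedding) rows; an HNSW index only approximates this ranking."""
    return sorted(rows, key=lambda r: l2_distance(query, r[1]))[:n]
```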